# Archive for July, 2013

## Logic without Existential Import or Free Logic

Posted by allzermalmer on July 31, 2013

Aristotle’s Logic
(x)(F(x) → G(x)) → (∃x)(F(x) & G(x))

Modern Logic
[(x)(F(x) → G(x)) & (∃x)F(x)] → (∃x)(F(x) & G(x))

Modern Logic can discriminate between inferences whose validity requires an existence assumption and inferences whose validity does not.

Required Existence Assumption
[(x)(F(x) → G(x)) & (∃x)F(x)] → (∃x)(F(x) & G(x))

Non-Required Existence Assumption
(x)F(x) → (∃x)F(x)

When we move from Quantification Theory to Identity Theory, Modern Logic’s new formula no longer holds, because there is a counterexample.

Assume that [(x)(F(x) → G(x)) & (∃x)F(x)] → (∃x)(F(x) & G(x)).
Substitute “=y” for “F”, so that F(x) becomes x=y.
Then (x)(x=y → G(x)) → (∃x)(x=y & G(x)).

So Modern Logic’s Quantification Theory view of Existential Import would imply that all statements in Identity Theory of the form (x)(x=y → G(x)) carry the Existential Import of (∃x)(x=y & G(x)).

The source of this error in Modern Logic is Particularization.

(1) (x)(x=y → G(x)) → (∃x)(x=y & G(x))
is deducible from the valid formula
(2) [(x)(F(x) → G(x)) & (∃x)F(x)] → (∃x)(F(x) & G(x))
by substituting “=y” for “F” and detaching with
(3) (∃x)(x=y)
(3) is valid in conventional Identity Theory and deducible from the Identity Axiom
(4) y=y

Particularization is F(y) → (∃x)(x=y); applied to (4), it yields (3).

So Free Logic comes about to get rid of the existential import that both Aristotle’s and Modern Logic allow. It is a logic free of existential import or assumption, built from Quantification Theory by altering some axioms.

(x)F(x) → F(y) is an axiom of Quantification Theory which is replaced in Free Logic.

A1) (y)((x)F(x) → F(y))
A2) (x)(F(x) → G(x)) → ((x)F(x) → (x)G(x))
A3) x=x
A4) x=y → (F(x) → F(y))
A5) (∃x)(F(x) → F(x))

From these axioms, (x)F(x) → (∃x)F(x) isn’t derivable. This means that existential import is not derivable from the axioms of Free Logic’s quantification theory.
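Why existential import fails can also be seen model-theoretically: over an empty domain a universal claim is vacuously true while the corresponding existential claim is false. Here is a minimal sketch in Python; the function names are my own, purely illustrative:

```python
# Illustrative model checker over a finite (possibly empty) domain,
# showing why (x)F(x) -> (Ex)F(x) needs an existence assumption.

def forall(domain, pred):
    # (x)F(x) is vacuously true over an empty domain
    return all(pred(a) for a in domain)

def exists(domain, pred):
    # (Ex)F(x) is false over an empty domain
    return any(pred(a) for a in domain)

def existential_import(domain, pred):
    # the material conditional (x)F(x) -> (Ex)F(x)
    return (not forall(domain, pred)) or exists(domain, pred)

print(existential_import([1, 2, 3], lambda a: a > 0))  # True
print(existential_import([], lambda a: a > 0))         # False: vacuously
                                                       # true antecedent,
                                                       # false consequent
```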

However, what can be derived is [F(x) & (∃y)(y=x)] → (∃y)F(y).

(x)(F(x) → G(x)) → ((∃x)F(x) → (∃x)G(x)) can also be derived from the system.

(∃x)(x=y) can’t be derived, and neither can (x)(x=y → G(x)) → (∃x)(x=y & G(x)).

This Free Logic allows us to differentiate between singular inference patterns where the existence assumption is relevant and those where it is not.

Singular Inference Pattern with Existence Assumption:
from F(x) and (∃y)(y=x), infer (∃y)F(y)

Singular Inference Pattern without Existence Assumption:
from x=y, infer F(x) → F(y)

## Impossibility Paradox of Computers by Curry Paradox

Posted by allzermalmer on July 29, 2013

There was a paper called “Computer Implication and the Curry Paradox”, authored by Wayne Aitken and Jeffrey A. Barrett, which appeared in the Journal of Philosophical Logic, vol. 33, in 2004.

Suppose an Implication Program takes as input two statements about the behavior of programs; it then tries to deduce the second statement from the first by the rules specified in its library.

If the program finds a deduction of the second statement from the first, then it halts and outputs 1 to signal that a proof has been found.

The Implication Program can prove statements involving the Implication Program itself.

It is assumed throughout the paper that programs are written in a fixed language for a computer with unlimited memory.

The Impossibility Theorem basically states that “no sufficiently powerful implication program can incorporate an unrestricted form of modus ponens.”

One of the consequences of this Impossibility Theorem is that “modus ponens is an example of a valid rule of inference that can be defined algorithmically, but cannot be used by the implication program.”

Assume (1) that property C(X) is defined to hold if and only if X’s having property X implies Goldbach’s Conjecture. Furthermore, suppose (2) that C(C). By the definition of C(X), this means that C(C) implies Goldbach’s Conjecture. Since C(C) is true by assumption, it follows by Modus Ponens that Goldbach’s Conjecture is true.

This doesn’t prove Goldbach’s Conjecture yet. However, it does prove that C(C) implies Goldbach’s Conjecture. So by the definition of C(X), it follows that C(C) is true. And by Modus Ponens again, Goldbach’s Conjecture is true.

A statement is a list [prog, in, out], where prog is a program considered as data, in is an input for prog, and out is an anticipated output.

A statement is called true if the program prog halts with input in and output out. A statement that isn’t true is called false.

“There is a program to check but not to test whether [prog, in, out] is a true statement. Given [prog, in, out] as an input, it first runs prog as a sub-process with input in. If and when prog halts, it compares the actual output with out. If they match then the program outputs 1; if they do not match, the program does something else (say, outputs 0). This program will output 1 if and only if [prog, in, out] is true, but it might not halt if [prog, in, out] is false. Due to the halting problem, no program can check for falsity.”

So there is a program that checks whether [prog, in, out] is a true statement, but no program can test whether it is: checking may fail to halt when the statement is false. From the Halting Problem, no program can check for falsity. So a program can check for a statement’s truth, but not for its falsity.
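A minimal sketch of what such a truth-checker might look like, with programs modeled as Python functions. The names `check` and `double` are illustrative assumptions, not the paper’s code:

```python
# Illustrative sketch of the truth-checking program. A statement is a
# list [prog, in, out]; programs are modeled as Python functions. Like
# the checker in the paper, this may fail to halt when the statement is
# false, since prog(inp) itself may never return.

def check(statement):
    prog, inp, out = statement
    actual = prog(inp)              # run prog as a sub-process on inp
    return 1 if actual == out else 0

double = lambda n: 2 * n
print(check([double, 3, 6]))   # 1: the statement [double, 3, 6] is true
print(check([double, 3, 7]))   # 0 here; in general falsity cannot be
                               # detected, by the halting problem
```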

It will use 1 to signal a positive result and 0 to signal a negative result. Failure to halt also indicates a negative result, but it is not a signal of any real use: the failure of the program to halt doesn’t signal a negative result, though it does indicate one.

“A rule is a program that takes as input a list of statements and outputs a list of statements…A valid rule is a rule with the property that whenever the input list consists of only true statements, the output list also consists of only true statements.”

It is required that the output list include the input list as a sublist, and that the rule halt for all input lists.

Take the AND program. It expects as input a list of two statements [A, B]. It first checks the truth of A in the manner indicated above. If it determines that A is true, it then checks the truth of B. If B is true, it outputs 1. If either A or B is false, the AND program fails to halt.

“A library is a list of rules used by an implication program in its proofs. We assume here that the library is finite at any given time. A valid library is one that contains only valid rules.”

“Consider the implication program ⇒ defined as follows. The program ⇒ expects as input a list of two statements [A, B]. Then it sets up and manipulates a list of sentences called the consequence list. The consequence list begins as a singleton list consisting only of A. The program ⇒ then goes to the library and chooses a rule. It applies the rule to the consequence list, and the result becomes the new consequence list. Since rules are required to include the input list as a sublist of the output list, once a statement appears on any consequence list it will appear on all subsequent consequence lists. After applying a rule, the program ⇒ checks whether the consequent B is on the new consequence list. If so, it outputs 1; otherwise it chooses another rule, applies it to update the consequence list, and checks for B on the new consequence list. It continues to apply the rules in an exhaustive way until B is found, in which case ⇒ outputs 1. If the consequent B is never found, the implication program ⇒ does not halt.”
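The loop just described might be sketched as follows. This is an illustrative reconstruction, not the paper’s code: statements are modeled as plain Python values, and the sketch returns None where the real ⇒ would simply fail to halt.

```python
# Illustrative reconstruction of the implication program "=>". A rule
# maps a list of statements to a list containing the input as a sublist.

def implication_program(a, b, library):
    consequences = [a]
    while True:
        grew = False
        for rule in library:
            new = rule(consequences)
            if len(new) > len(consequences):
                grew = True
            consequences = new
            if b in consequences:
                return 1            # proof of b from a has been found
        if not grew:
            return None             # no rule adds anything new: the real
                                    # program would fail to halt here

# a toy rule: whenever "p" is present, add "q"
rule = lambda stmts: stmts + (["q"] if "p" in stmts and "q" not in stmts else [])
print(implication_program("p", "q", [rule]))  # 1
print(implication_program("p", "r", [rule]))  # None
```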

Take the Modus Ponens program. It expects an input list of statements and starts by forming an empty result list. It searches the input list for any statement of the form [⇒, [A, B], 1], where A and B are statements. For each such statement, it checks whether A is also a statement on the input list. If A is found, the Modus Ponens program adds B to the result list. The program then outputs a list consisting of all the statements of the input list followed by all the statements of the result list (if any).

“The Modus Ponens program is a rule. A rule is valid if, for an input list of true statements, it only adds true statements. From the definition of ⇒, if [⇒, [A, B], 1] and A are on the input list and if they are both true and if the library is valid, then B will be true. So, MP is a valid rule if the library used by ⇒ is valid.”
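A sketch of the Modus Ponens rule as a list-transforming program, under the assumption that implication statements are encoded as nested lists tagged "=>" (an illustrative encoding, not the paper’s):

```python
# Sketch of the Modus Ponens rule MP over an illustrative encoding.

def modus_ponens(statements):
    result = []
    for s in statements:
        # look for statements of the form [=>, [A, B], 1]
        if isinstance(s, list) and len(s) == 3 and s[0] == "=>" and s[2] == 1:
            a, b = s[1]
            # if the antecedent A is also on the input list, derive B
            if a in statements and b not in statements + result:
                result.append(b)
    # output: the input list followed by the newly derived statements
    return statements + result

stmts = [["=>", ["A", "B"], 1], "A"]
print(modus_ponens(stmts))  # [['=>', ['A', 'B'], 1], 'A', 'B']
```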

The EQ program expects as input a list [m, n] of two natural numbers. If m = n, then EQ outputs 1; otherwise it outputs 0. This gives an example of a statement that is clearly false: let false be the statement [EQ, [0, 1], 1]. Since 0 ≠ 1, EQ outputs 0 on input [0, 1], so the statement [EQ, [0, 1], 1] is false.
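A sketch of EQ and of the canonical false statement, again in the illustrative encoding:

```python
# Sketch of EQ and the canonical false statement.

def eq(pair):
    m, n = pair
    return 1 if m == n else 0

false_statement = ["EQ", [0, 1], 1]   # claims EQ outputs 1 on [0, 1]
print(eq([0, 1]))   # 0: since 0 != 1, false_statement is a false statement
print(eq([2, 2]))   # 1
```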

“Consider the program CURRY defined as follows. It expects a program X as input. Then it runs ⇒ as a subprocess with input [[X, X, 1], false]. The output of the subprocess (if any) is then used as the output of CURRY. If X checks for a particular property of programs, then the statement [X, X, 1] asserts that the program X has the very property for which it checks. The program CURRY when applied to program X can be thought of as trying to find a proof by contradiction that the statement [X, X, 1] does not hold.”

There is only one way that CURRY can output 1 with input X: if ⇒ outputs 1 with input [[X, X, 1], false]. This is what lies behind the Ad Hoc Rule (AH).

AH expects a list of statements as input. It begins by producing an empty result list. It then checks its input for statements of the form [CURRY, X, 1], where X is a program. For each such statement on the input list, AH adds the statement [⇒, [[X, X, 1], false], 1] to the result list. AH then constructs its output list, which contains the statements of the input list followed by the statements of the result list (if any).

AH is a valid rule because, whenever the statements on the input list are true, it adds only true statements to form the output list: if [CURRY, X, 1] is true, then by the definition of CURRY the statement [⇒, [[X, X, 1], false], 1] is true as well. AH is ad hoc because it is specifically designed for the CURRY program.

“We now describe an algorithmic version of the Curry paradox. We assume that the library is valid and contains MP and AH. Consider what happens when we run ⇒ with input [[CURRY, CURRY, 1], false]. First a consequence list containing the statement [CURRY, CURRY, 1] is set up. Next rules from the library are applied to the consequence list. At some point the Ad Hoc Rule AH is applied and, since [CURRY, CURRY, 1] is on the consequence list, [⇒, [[CURRY, CURRY, 1], false], 1] is added to the consequence list. Because of this, when MP is next applied to the consequence list, false will be added to the list. Since the initial input had the statement false as the second item on the input list, ⇒ will halt with output 1 when false appears on the consequence list.”

So the Implication Program ⇒ outputs 1 with input [[CURRY, CURRY, 1], false]. By the definition of the CURRY program, this implies that CURRY outputs 1 when CURRY is given as input. Basically, the statement [CURRY, CURRY, 1] is true. And, as the next paragraph shows, this makes a false statement true.

Now suppose that ⇒ is applied to [[CURRY, CURRY, 1], false]. Because the antecedent [CURRY, CURRY, 1] is true and the library is valid, all statements added to the consequence list will also be true. But the statement false is added to the consequence list, which means that false is true, which is a contradiction.

The Curry Paradox has occurred in a concrete setting: a perfectly well-defined program, and careful reasoning about its expected behavior.

The Curry Paradox proves that any library containing both the Modus Ponens program and the Ad Hoc Rule is not valid. AH is unconditionally valid, so we can conclude that MP is not valid in the case where all the other rules in the library are valid.

“We conclude from this that there are valid inference rules (including MP) that are valid only so long as they are not included in the library of rules to be used. Informally, we can say that there are valid rules that one is not allowed to use (in an unrestricted manner) in one’s proofs. It is the very usage of the rule in inference that invalidates it.”

In order to maintain a valid open library, one must check not only that a rule is valid by itself but also that it remains valid when added to the library. A rule is independently valid if it is valid regardless of which library is used by the implication program. The Ad Hoc Rule is an example of an independently valid rule. Any library consisting of only independently valid rules is valid.

The Modus Ponens rule isn’t independently valid; its validity is contingent on the nature of the library. The Curry Paradox itself provides an example of libraries in which MP is not valid.

The source of the paradox can thus be considered to be the misuse of MP, and it is suggested that modus ponens is likewise the source of the classical Curry paradox.

## Lack of Knowledge implies Knowledge

Posted by allzermalmer on July 28, 2013

Socrates is reputed to have said that all he knows is that he doesn’t know anything: I know that I don’t know.

There is a formal system known as epistemic logic. It deals with an epistemic operator, K. One of the axioms of epistemic logic is known, in some sense, as negative knowledge.

Negative Knowledge: ~Kp → K~Kp (in Polish notation, CNKpKNKp)

If I don’t know p, then I know that I don’t know p. Not knowing p implies knowing that I don’t know p.

If I don’t know what it looks like down at the center of the Earth (or Sun), then I know that I don’t know what it looks like down at the center of the Earth (or Sun).

Furthermore, from this Axiom, we may easily show that not knowing something implies knowing something.

All we need is our axiom of negative knowledge, CNKpKNKp, and the law of contraposition. This law states that we may swap the antecedent (i.e. NKp) with the consequent (i.e. KNKp), negating both propositions when we switch their places.

By the law of contraposition applied to negative knowledge, we obtain CNKNKpNNKp.
Now we apply the law of double negation to the consequent (i.e. NNKp), and we obtain CNKNKpKp.

We obtain that if we don’t know that we don’t know something, then we know something.
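The propositional steps used here, contraposition and double negation, can be machine-checked by treating Kp and K~Kp as atoms. This is only a sketch of the propositional skeleton, not a full epistemic-logic prover:

```python
from itertools import product

def imp(x, y):
    # material conditional, C in Polish notation
    return (not x) or y

# treat Kp and KNKp as independent atoms a and b
for a, b in product([False, True], repeat=2):
    axiom = imp(not a, b)       # NKp -> KNKp, i.e. CNKpKNKp
    derived = imp(not b, a)     # NKNKp -> Kp, i.e. CNKNKpKp
    assert imp(axiom, derived)  # the axiom propositionally entails the result
print("the contraposition + double negation steps are valid")
```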

## Proof of Disjunctive Syllogism

Posted by allzermalmer on July 28, 2013

Language

(I) Symbols: Ø = contradiction, → = conditional, and [] = Modal Operator
(II) Variables: p, q, r, p’, q’, r’. (Variables lower case)

Well Formed Formula for Language

(i) Ø and any variable are modal sentences.
(ii) If A is a modal sentence, then []A is a modal sentence.
(iii) If A is a modal sentence and B is a modal sentence, then A implies B (A→B) is a modal sentence.

* A, B, and C are modal sentences, i.e. upper-case letters stand for modal sentences. These upper-case letters are variables as well: they represent the lower-case variables in combination with the contradiction, the conditional, or the modal operator.

So A may stand for p, or q, or r. It may also stand for a compound of variables and symbols: A may stand for q, or A may stand for p→Ø, etc.

Negation (~) = A→Ø
Conjunction (&) = ~(A→B)
Disjunction (v) = ~A→B
Biconditional (↔) = (A→B) & (B→A)

Because Ø indicates contradiction, Ø is always false. But by the truth table of material implication, A → Ø is true if and only if either A is false or Ø is true. But Ø can’t be true. So A → Ø is true if and only if A is false.

This symbol ∞ will stand for something being proved.

(1) Hypothesis (HY) : A new hypothesis may be added to a proof anytime, but the hypothesis begins a new sub-proof.

(2) Modus Ponens (MP): From A implies B and A, we may infer B; A implies B, A, and B must lie in exactly the same sub-proof.

(3) Conditional Proof (CP): When a proof of B is derived from the hypothesis A, it follows that A implies B, where A implies B lies outside the sub-proof begun by hypothesis A.

(4) Double Negation (DN): Double negation may be removed; ~~A and A lie in the same sub-proof.

(5) Reiteration (R): Sentence A may be copied into a new sub-proof.

Proof of Disjunctive Syllogism: Because at least one disjunct must be true, by knowing one is false we can infer that the other is true.

If either p or q, and not p, then q is necessarily true.

Premise (1) p v q (Hypothesis)
Premise (2) ~p (Hypothesis)
(3) ~p implies q ((1) and Definition of v)
Conclusion (4) q (Modus Ponens by (2) and (3))
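The proof can also be double-checked semantically with a truth-table sweep that uses the document’s own definitions of ~ and v. This is only a sketch; Ø is modeled as the constant False:

```python
from itertools import product

O = False  # the contradiction, always false

def imp(a, b):
    # truth table of the material conditional
    return (not a) or b

def neg(a):
    return imp(a, O)        # ~A  :=  A -> Ø

def disj(a, b):
    return imp(neg(a), b)   # A v B  :=  ~A -> B

# sweep all valuations: (p v q) and ~p together entail q
for p, q in product([False, True], repeat=2):
    if disj(p, q) and neg(p):
        assert q
print("disjunctive syllogism holds in every valuation")
```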

## Proof of Modus Tollens

Posted by allzermalmer on July 28, 2013

Language

(I) Symbols: Ø = contradiction, → = conditional, and [] = Modal Operator
(II) Variables: p, q, r, p’, q’, r’. (Variables lower case)

Well Formed Formula for Language

(i) Ø and any variable are modal sentences.
(ii) If A is a modal sentence, then []A is a modal sentence.
(iii) If A is a modal sentence and B is a modal sentence, then A implies B (A→B) is a modal sentence.

* A, B, and C are modal sentences, i.e. upper-case letters stand for modal sentences. These upper-case letters are variables as well: they represent the lower-case variables in combination with the contradiction, the conditional, or the modal operator.

So A may stand for p, or q, or r. It may also stand for a compound of variables and symbols: A may stand for q, or A may stand for p→Ø, etc.

Negation (~) = A→Ø
Conjunction (&) = ~(A→B)
Disjunction (v) = ~A→B
Biconditional (↔) = (A→B) & (B→A)

Because Ø indicates contradiction, Ø is always false. But by the truth table of material implication, A → Ø is true if and only if either A is false or Ø is true. But Ø can’t be true. So A → Ø is true if and only if A is false.

This symbol ∞ will stand for something being proved.

(1) Hypothesis (HY) : A new hypothesis may be added to a proof anytime, but the hypothesis begins a new sub-proof.

(2) Modus Ponens (MP): From A implies B and A, we may infer B; A implies B, A, and B must lie in exactly the same sub-proof.

(3) Conditional Proof (CP): When a proof of B is derived from the hypothesis A, it follows that A implies B, where A implies B lies outside the sub-proof begun by hypothesis A.

(4) Double Negation (DN): Double negation may be removed; ~~A and A lie in the same sub-proof.

(5) Reiteration (R): Sentence A may be copied into a new sub-proof.

Proof of Modus Tollens: Given the conditional claim that the consequent is true if the antecedent is true, and given that the consequent is false, we can infer that the antecedent is also false.

(If p implies q, and ~q, then ~p is necessarily true.)

Premise (1) p implies q (Hypothesis)
Premise (2) ~q (Hypothesis)
(3) q implies Ø ((2) and Definition of ~)
(4) p (Hypothesis)
(5) p implies q (Reiteration of (1))
(6) q (Modus Ponens by (4) and (5))
(7) q implies Ø (Reiteration of (3))
(8) Ø (Modus Ponens by (6) and (7))
(9) p implies Ø (Conditional Proof by (4) through (8))
Conclusion (10) ~p ((9) and Definition of ~)

Shortened version, with some steps omitted, would go as follows.

P (1) p implies q
P (2) ~q
(3) q implies Ø ((2) and Definition of ~)
(4) p (Hypothesis)
(5) q (Modus Ponens by (1) and (4))
(6) Ø (Modus Ponens by (3) and (5))
(7) p implies Ø (Conditional Proof by (4) through (6))
C (8) ~p ((7) and Definition of ~)

Here is an even shorter proof of Modus Tollens; it only requires the rule of inference of Hypothetical Syllogism:

(1) p implies q (Hypothesis)
(2) q implies Ø (Hypothesis)
(3) p implies Ø (Hypothetical Syllogism by (1) and (2))
(4) ~p ((3) and Definition of ~)

So we have proved that If p implies q and ~q, then ~p is necessarily true.
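As with Disjunctive Syllogism, a truth-table sweep using the definition ~A := A→Ø confirms the result. Again a sketch, with Ø modeled as the constant False:

```python
from itertools import product

O = False  # the contradiction Ø

def imp(a, b):
    # truth table of the material conditional
    return (not a) or b

def neg(a):
    return imp(a, O)  # ~A := A -> Ø

# sweep all valuations: (p -> q) and ~q together entail ~p
for p, q in product([False, True], repeat=2):
    if imp(p, q) and neg(q):
        assert neg(p)
print("modus tollens holds in every valuation")
```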

## Science Aims for the Improbable

Posted by allzermalmer on July 27, 2013

Karl Popper helped to present the principle that separates scientific statements from non-scientific statements. This separation is based on the principle of falsifiability.

One thing that follows from the principle of falsifiability is that scientific statements that are highly improbable have more scientific content. Statements that are improbable say more about the world.

The content of an empirical statement can be seen through a simple logical observation.

Suppose that we have the statement (1) “Ravens won Superbowl 35 and Ravens won Superbowl 47”.

This statement is a conjunction, and joins two individual statements. These individual statements, respectively, are (i) “Ravens won Superbowl 35” and (ii) “Ravens won Superbowl 47”.

The content of statement (1) is greater than that of each part: (1) says more than (i) alone, and more than (ii) alone. The Ravens winning Superbowl 35 doesn’t say anything about winning Superbowl 47, and vice versa.

Here is another example, but a more general example.

(2) All ravens in North America are black.

This statement is a conjunction, in some sense, because we can say it has three parts: (i) All ravens in the US are black, (ii) All ravens in Canada are black, and (iii) All ravens in Mexico are black.

So the content of (1) is greater than the content of “The Ravens won Superbowl 47”, and the content of (2) is greater than the content of “All ravens in the US are black”.

Law of Content:

Content of (i) ≤ Content of (1) ≥ Content (ii)

Content of “the Ravens won Superbowl 35” ≤ Content of “the Ravens won Superbowl 35 & the Ravens won Superbowl 47” ≥ Content of “the Ravens won Superbowl 47”.

Content of (1) is greater than or equal to the Content of (i) and the Content of (ii).

Law of probability:

P(i) ≥ P(1) ≤ P(ii)

Probability of “the Ravens won Superbowl 35” ≥ Probability of “The Ravens won Superbowl 35 and the Ravens won Superbowl 47” ≤ Probability of “the Ravens won Superbowl 47”.

Probability of (1) is less than or equal to the Probability of (i) and the Probability of (ii).

So we immediately notice something: when we combine statements, the content increases and the probability decreases. An increase in probability means a decrease in content, and an increase in content means a decrease in probability.
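The law of probability for conjunctions can be illustrated numerically. The probabilities below are made-up illustrative values, and independence is an added assumption used only to compute the joint probability:

```python
# Made-up illustrative probabilities (not real estimates).
p_sb35 = 0.5   # P("Ravens won Superbowl 35")
p_sb47 = 0.5   # P("Ravens won Superbowl 47")

p_conj = p_sb35 * p_sb47   # P(1), the conjunction, assuming independence
print(p_conj)              # 0.25: lower than either conjunct alone

# Whatever the dependence, P(A & B) <= min(P(A), P(B)) always holds:
assert p_conj <= min(p_sb35, p_sb47)
```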

Let us work with the second example, i.e. (2) All ravens in North America are black.

This statement is true only if all of its individual parts are true: (2) can only be true if all ravens in Canada are black, all ravens in the US are black, and all ravens in Mexico are black. Suppose that not all ravens in Canada are black, because there exists a raven in Canada that is white. This shows that (2), as a conjunction, is false, and it shows that (ii) is false. But it doesn’t show that (i) and (iii) are false: “All ravens in the US are black” and “All ravens in Mexico are black” haven’t been falsified yet.

(2) is false because, for a conjunction to be true, all of its parts or conjuncts must be true. If one of them is false, then the whole conjunction is false.

We immediately find that empirical statements with more content have lower probability. And empirical statements with lower probability are also easier to falsify: it is easier to find out whether they are false, which helps us to make progress.

For example, hypothesis (2) has a lower probability than each of its parts. Finding out by observation that this hypothesis is false will eliminate one of its conjuncts, e.g. eliminating (ii) while not eliminating (i) and (iii).

The conjuncts that are not eliminated open the way for scientific progress. The falsification shows what is false, which informs us of a modification we need to make. In making this modification we learn that (i) and (iii) haven’t been falsified. So our new hypothesis would have to contain both (i) and (iii), and also the negation of (ii).

This new empirical statement would also have more content: it contains (i), ~(ii), and (iii) as parts of its content. It contains both what hasn’t yet been shown false and what has been shown false.

## A Solipsist Can’t Falsify their Falsifiable Hypothesis

Posted by allzermalmer on July 27, 2013

Karl Popper’s methodological system of falsifiability, which demarcates empirical statements (or systems of statements) from non-empirical ones, relies on empirical statements being “public” or “inter-subjectively criticizable”.

Popper goes on to say that “Only when certain events recur in accordance with rules or regularities, as is the case with repeatable experiments, can our observations be tested- in principle- by anyone. We do not take even our own observations quite seriously, or accept them as scientific observations, until we have repeated and tested them. Only by such repetitions can we convince ourselves that we are not dealing with mere isolated ‘coincidence’, but with events which, on account of their regularity and reproducibility, are in principle inter-subjectively testable.” The Logic of Scientific Discovery pg. 23

How would a Solipsist fit into methodological falsification, as an individual trying to take part in empirical statements?

One simple answer would be that a Solipsist can’t take part in science or produce empirical statements. The solipsist cannot take part in science because there is no discussion to be had. Discussions involve more than one individual, and the Solipsist would be the only individual. One sock short of warm toes.

A solipsist, however, could produce a weaker version of what Popper presents.

This weaker version of methodological falsification would have many things in common with Popper’s, but at least one difference: the empirical statements for a Solipsist aren’t necessarily “public” or “inter-subjectively testable”.

Empirical statements would have to be contingent statements. A contingent statement is possibly true and possibly false. It is possibly true that the Ravens won the Superbowl, and it is possibly false that the Ravens won the Superbowl. So “the Ravens won the Superbowl” is a contingent statement.

Popper’s point about “public” observation appears to have one thing in common with a solipsist. Popper points out that “We do not take even our own observations quite seriously, or accept them as scientific observations, until we have repeated and tested them.” A Solipsist would appear to meet this requirement.

So a Solipsist could make statements and check whether they end up being shown false by future observations. But the only individual who can check for falsifying observations is the Solipsist. In principle, only the Solipsist could show their own statements to be false.

From this, the Solipsist could not meet the second condition of being “public”, as laid out by Popper: “Only when certain events recur in accordance with rules or regularities, as is the case with repeatable experiments, can our observations be tested- in principle- by anyone.”

The Solipsist may produce a hypothetical system and check its internal consistency, making sure that no contradictions can be derived from it; the Solipsist may also check which statements can be derived from it to be tested against observations. Finding no contradictions, the Solipsist may move on to check the system against some observations.

In the process of looking for observations, the Solipsist is guided by the system’s being of a reproducible nature and by its forbidding certain events from happening. So the Solipsist could hold the statement “All x are y” and go looking for a single x that is not y. Such an observation would show the hypothesis to be false.
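The search just described can be sketched as a simple generate-and-test loop; the predicate and the observations below are invented purely for illustration:

```python
# Sketch of the falsification search for "All x are y": scan observations
# for a single counterexample. The data here is invented for illustration.

def falsify(hypothesis, observations):
    # return the first observed counterexample, or None if none is found
    for x in observations:
        if not hypothesis(x):
            return x
    return None

all_ravens_black = lambda raven: raven["color"] == "black"
observed = [{"color": "black"}, {"color": "black"}, {"color": "white"}]
print(falsify(all_ravens_black, observed))  # {'color': 'white'}
```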

For all practical purposes, the Solipsist would be going through the same mechanism without being “public” in the full sense that Popper mentions.