
Thread: Aristotelian vs Modern Logic

  1. Top | #1
    Contributor
    Join Date
    Nov 2017
    Location
    seattle
    Posts
    5,034
    Rep Power
    13

    Aristotelian vs Modern Logic

    An overview of what is meant by Aristotelian logic. There are inherent problems with syllogistic logic, and there is an ongoing debate in philosophy over form versus substance.

    I see Aristotelian logic as more of a debate based on propositions and assertions. Modern logic encompasses a number of systems with distinct grammars and symbols: true or false inputs to a grammar that yield a true or false output.

    Now I see my disconnect with some of the posts on logic. Other than in general conversation I do not use syllogistic approaches. I live and die by the truth table.

    https://en.wikipedia.org/wiki/Term_logic

    In philosophy, term logic, also known as traditional logic, syllogistic logic or Aristotelian logic, is a loose name for an approach to logic that began with Aristotle and that was dominant until the advent of modern predicate logic in the late nineteenth century. This entry is an introduction to the term logic needed to understand philosophy texts written before it was replaced as a formal logic system by predicate logic. Readers lacking a grasp of the basic terminology and ideas of term logic can have difficulty understanding such texts, because their authors typically assumed an acquaintance with term logic.

    Decline of term logic[edit]

    Term logic began to decline in Europe during the Renaissance, when logicians like Rodolphus Agricola Phrisius (1444–1485) and Ramus (1515–1572) began to promote place logics. The logical tradition called Port-Royal Logic, or sometimes "traditional logic", saw propositions as combinations of ideas rather than of terms, but otherwise followed many of the conventions of term logic. It remained influential, especially in England, until the 19th century. Leibniz created a distinctive logical calculus, but nearly all of his work on logic remained unpublished and unremarked until Louis Couturat went through the Leibniz Nachlass around 1900, publishing his pioneering studies in logic.

    19th-century attempts to algebraize logic, such as the work of Boole (1815–1864) and Venn (1834–1923), typically yielded systems highly influenced by the term-logic tradition. The first predicate logic was that of Frege's landmark Begriffsschrift (1879), little read before 1950, in part because of its eccentric notation. Modern predicate logic as we know it began in the 1880s with the writings of Charles Sanders Peirce, who influenced Peano (1858–1932) and even more, Ernst Schröder (1841–1902). It reached fruition in the hands of Bertrand Russell and A. N. Whitehead, whose Principia Mathematica (1910–13) made use of a variant of Peano's predicate logic.

    Term logic also survived to some extent in traditional Roman Catholic education, especially in seminaries. Medieval Catholic theology, especially the writings of Thomas Aquinas, had a powerfully Aristotelean cast, and thus term logic became a part of Catholic theological reasoning. For example, Joyce's Principles of Logic (1908; 3rd edition 1949), written for use in Catholic seminaries, made no mention of Frege or of Bertrand Russell.[11]

    Revival[edit]

    Some philosophers have complained that predicate logic:
    Is unnatural in a sense, in that its syntax does not follow the syntax of the sentences that figure in our everyday reasoning. It is, as Quine acknowledged, "Procrustean," employing an artificial language of function and argument, quantifier, and bound variable.
    Suffers from theoretical problems, probably the most serious being empty names and identity statements.

    Even academic philosophers entirely in the mainstream, such as Gareth Evans, have written as follows:
    "I come to semantic investigations with a preference for homophonic theories; theories which try to take serious account of the syntactic and semantic devices which actually exist in the language ...I would prefer [such] a theory ... over a theory which is only able to deal with [sentences of the form "all A's are B's"] by "discovering" hidden logical constants ... The objection would not be that such [Fregean] truth conditions are not correct, but that, in a sense which we would all dearly love to have more exactly explained, the syntactic shape of the sentence is treated as so much misleading surface structure" (Evans 1977)

  2. Top | #2
    Predicate logic compared to term logic: modern logic is predicate logic, with input variables that are true or false and an output that is true or false.

    https://en.wikipedia.org/wiki/First-order_logic

    First-order logic—also known as predicate logic and first-order predicate calculus—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man" one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier and x is a variable.[1] This distinguishes it from propositional logic, which does not use quantifiers or relations;[2] in this sense, propositional logic is the foundation of first-order logic.
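    The quantified expression above can be sketched in Python over a small finite domain of discourse (the domain and predicate names here are illustrative, not part of the formal system):

    ```python
    # Interpret "there exists x such that x is Socrates and x is a man"
    # over a small, illustrative finite domain of discourse.
    domain = ["Socrates", "Plato", "Fido"]

    def is_socrates(x):
        return x == "Socrates"

    def is_man(x):
        return x in ("Socrates", "Plato")  # Fido is not a man

    # Over a finite domain, the existential quantifier becomes any().
    exists = any(is_socrates(x) and is_man(x) for x in domain)
    print(exists)  # True
    ```

    Over an infinite domain the quantifier cannot be checked by enumeration like this; that is where the formal semantics of first-order logic does real work.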

    A theory about a topic is usually a first-order logic together with a specified domain of discourse over which the quantified variables range, finitely many functions from that domain to itself, finitely many predicates defined on that domain, and a set of axioms believed to hold for those things. Sometimes "theory" is understood in a more formal sense, which is just a set of sentences in first-order logic.

    The adjective "first-order" distinguishes first-order logic from higher-order logic in which there are predicates having predicates or functions as arguments, or in which one or both of predicate quantifiers or function quantifiers are permitted.[3] In first-order theories, predicates are often associated with sets. In interpreted higher-order theories, predicates may be interpreted as sets of sets.
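    The association of predicates with sets mentioned above can be made concrete with a short sketch (the domain and predicate here are illustrative):

    ```python
    # Model a unary predicate over a finite domain as the set of
    # elements that satisfy it (its extension) -- illustrative sketch.
    domain = {0, 1, 2, 3, 4, 5}
    is_even = {x for x in domain if x % 2 == 0}  # the predicate's extension

    def holds(predicate_set, x):
        # P(x) is true exactly when x belongs to the predicate's set.
        return x in predicate_set

    print(holds(is_even, 4))  # True
    print(holds(is_even, 3))  # False
    ```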

    There are many deductive systems for first-order logic which are both sound (all provable statements are true in all models) and complete (all statements which are true in all models are provable). Although the logical consequence relation is only semidecidable, much progress has been made in automated theorem proving in first-order logic. First-order logic also satisfies several metalogical theorems that make it amenable to analysis in proof theory, such as the Löwenheim–Skolem theorem and the compactness theorem.

    First-order logic is the standard for the formalization of mathematics into axioms and is studied in the foundations of mathematics. Peano arithmetic and Zermelo–Fraenkel set theory are axiomatizations of number theory and set theory, respectively, into first-order logic. No first-order theory, however, has the strength to uniquely describe a structure with an infinite domain, such as the natural numbers or the real line. Axiom systems that do fully describe these two structures (that is, categorical axiom systems) can be obtained in stronger logics such as second-order logic.

    The foundations of first-order logic were developed independently by Gottlob Frege and Charles Sanders Peirce.[4] For a history of first-order logic and how it came to dominate formal logic, see José Ferreirós (2001).

    Syntax[edit]

    There are two key parts of first-order logic. The syntax determines which collections of symbols are legal expressions in first-order logic, while the semantics determine the meanings behind these expressions. Symbols and syntax are arbitrary, as in Boolean algebra. Whether the syntax or definitions violate Aristotle is irrelevant.

    Alphabet[edit]

    Unlike natural languages, such as English, the language of first-order logic is completely formal, so that it can be mechanically determined whether a given expression is legal. There are two key types of legal expressions: terms, which intuitively represent objects, and formulas, which intuitively express predicates that can be true or false. The terms and formulas of first-order logic are strings of symbols, where all the symbols together form the alphabet of the language. As with all formal languages, the nature of the symbols themselves is outside the scope of formal logic; they are often regarded simply as letters and punctuation symbols.

    It is common to divide the symbols of the alphabet into logical symbols, which always have the same meaning, and non-logical symbols, whose meaning varies by interpretation. For example, the logical symbol ∧ always represents "and"; it is never interpreted as "or". On the other hand, a non-logical predicate symbol such as Phil(x) could be interpreted to mean "x is a philosopher", "x is a man named Philip", or any other unary predicate, depending on the interpretation at hand.
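    The interpretation-dependence of non-logical symbols can be sketched in Python, treating an interpretation as a choice of function for the predicate symbol (all names here are illustrative):

    ```python
    # The formula Phil(x) is fixed; only the interpretation of the
    # non-logical symbol Phil varies (illustrative sketch).
    def formula(phil, x):
        return phil(x)

    def interp_philosopher(x):
        return x in {"Socrates", "Kant"}   # "x is a philosopher"

    def interp_named_philip(x):
        return x.startswith("Philip")      # "x is a man named Philip"

    print(formula(interp_philosopher, "Kant"))   # True
    print(formula(interp_named_philip, "Kant"))  # False
    ```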

  3. Top | #3
    On one of EB's threads he posted a syllogism, then asked if it is valid.

    p1 car is black
    p2 car is white
    c car is both black and white.

    A logical fallacy: something cannot be two contradictory things at the same time. p and !p is a contradiction, which is the problem with term logic.

    In predicate logic there are specific rules and definitions; there are no fallacies or contradictions.

    In Boolean I can write:
    p: a logic variable that is either true or false
    c: a binary variable that is either true or false

    !: negation (inversion)
    &: AND, with truth table (inputs a, b; output c):

    a b c
    f f f
    f t f
    t f f
    t t t

    c = p & !p. By the rules and definitions, p & !p is not a contradiction; c will always evaluate to false by the AND truth table. It has no practical value, but in Boolean it is not a contradiction or fallacy.

    Same with OR (||):

    c = p || !p; c is always true.
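    Both claims can be checked mechanically; a minimal Python sketch:

    ```python
    # Enumerate both values of p to verify that p AND (NOT p) is always
    # false and p OR (NOT p) is always true.
    for p in (False, True):
        and_case = p and (not p)  # evaluates to False for every p
        or_case = p or (not p)    # evaluates to True for every p
        print(p, and_case, or_case)
    ```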

    Boolean Algebra is mathematical logic and symbolic logic. It is essentially digital electronics.


    https://en.wikipedia.org/wiki/Boolean_algebra

    Boolean algebra

    In mathematics and mathematical logic, Boolean algebra is the branch of algebra in which the values of the variables are the truth values true and false, usually denoted 1 and 0 respectively. Instead of elementary algebra, where the values of the variables are numbers and the prime operations are addition and multiplication, the main operations of Boolean algebra are the conjunction ("and", denoted ∧), the disjunction ("or", denoted ∨), and the negation ("not", denoted ¬). It is thus a formalism for describing logical relations in the same way that elementary algebra describes numeric relations.

    Boolean algebra was introduced by George Boole in his first book The Mathematical Analysis of Logic (1847), and set forth more fully in his An Investigation of the Laws of Thought (1854).[1] According to Huntington, the term "Boolean algebra" was first suggested by Sheffer in 1913,[2] although Charles Sanders Peirce in 1880 gave the title "A Boolian Algebra with One Constant" to the first chapter of his "The Simplest Mathematics".[3] Boolean algebra has been fundamental in the development of digital electronics, and is provided for in all modern programming languages. It is also used in set theory and statistics.[4]

    History[edit]

    Boole's algebra predated the modern developments in abstract algebra and mathematical logic; it is however seen as connected to the origins of both fields.[5] In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder, Huntington, and others until it reached the modern conception of an (abstract) mathematical structure.[5] For example, the empirical observation that one can manipulate expressions in the algebra of sets by translating them into expressions in Boole's algebra is explained in modern terms by saying that the algebra of sets is a Boolean algebra (note the indefinite article). In fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets.

    In the 1930s, while studying switching circuits, Claude Shannon observed that one could also apply the rules of Boole's algebra in this setting, and he introduced switching algebra as a way to analyze and design circuits by algebraic means in terms of logic gates. Shannon already had at his disposal the abstract mathematical apparatus, thus he cast his switching algebra as the two-element Boolean algebra. In circuit engineering settings today, there is little need to consider other Boolean algebras, thus "switching algebra" and "Boolean algebra" are often used interchangeably.[6][7][8] Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Modern electronic design automation tools for VLSI circuits often rely on an efficient representation of Boolean functions known as (reduced ordered) binary decision diagrams (BDD) for logic synthesis and formal verification.[9]

    Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra. Thus, Boolean logic is sometimes used to denote propositional calculus performed in this way.[10][11][12] Boolean algebra is not sufficient to capture logic formulas using quantifiers, like those from first order logic. Although the development of mathematical logic did not follow Boole's program, the connection between his algebra and logic was later put on firm ground in the setting of algebraic logic, which also studies the algebraic systems of many other logics.[5] The problem of determining whether the variables of a given Boolean (propositional) formula can be assigned in such a way as to make the formula evaluate to true is called the Boolean satisfiability problem (SAT), and is of importance to theoretical computer science, being the first problem shown to be NP-complete. The closely related model of computation known as a Boolean circuit relates time complexity (of an algorithm) to circuit complexity.


    Propositional logic[edit]
    Main article: Propositional calculus

    Propositional logic is a logical system that is intimately connected to Boolean algebra.[4] Many syntactic concepts of Boolean algebra carry over to propositional logic with only minor changes in notation and terminology, while the semantics of propositional logic are defined via Boolean algebras in a way that the tautologies (theorems) of propositional logic correspond to equational theorems of Boolean algebra.

    Syntactically, every Boolean term corresponds to a propositional formula of propositional logic. In this translation between Boolean algebra and propositional logic, Boolean variables x, y, … become propositional variables (or atoms) P, Q, …, Boolean terms such as x∨y become propositional formulas P∨Q, 0 becomes false or ⊥, and 1 becomes true or ⊤. It is convenient when referring to generic propositions to use Greek letters Φ, Ψ, … as metavariables (variables outside the language of propositional calculus, used when talking about propositional calculus) to denote propositions.

    The semantics of propositional logic rely on truth assignments. The essential idea of a truth assignment is that the propositional variables are mapped to elements of a fixed Boolean algebra, and then the truth value of a propositional formula using these letters is the element of the Boolean algebra that is obtained by computing the value of the Boolean term corresponding to the formula. In classical semantics, only the two-element Boolean algebra is used, while in Boolean-valued semantics arbitrary Boolean algebras are considered. A tautology is a propositional formula that is assigned truth value 1 by every truth assignment of its propositional variables to an arbitrary Boolean algebra (or, equivalently, every truth assignment to the two element Boolean algebra).

    These semantics permit a translation between tautologies of propositional logic and equational theorems of Boolean algebra. Every tautology Φ of propositional logic can be expressed as the Boolean equation Φ = 1, which will be a theorem of Boolean algebra. Conversely every theorem Φ = Ψ of Boolean algebra corresponds to the tautologies (Φ∨¬Ψ) ∧ (¬Φ∨Ψ) and (Φ∧Ψ) ∨ (¬Φ∧¬Ψ). If → is in the language these last tautologies can also be written as (Φ→Ψ) ∧ (Ψ→Φ), or as two separate theorems Φ→Ψ and Ψ→Φ; if ≡ is available then the single tautology Φ ≡ Ψ can be used.
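    The correspondence can be tested by brute force: a formula is a tautology exactly when it evaluates to true under every truth assignment over the two-element Boolean algebra. A minimal sketch (the function names are illustrative):

    ```python
    from itertools import product

    def is_tautology(formula, num_vars):
        """Check a propositional formula, given as a Python function on
        booleans, against every truth assignment."""
        return all(formula(*vals)
                   for vals in product((False, True), repeat=num_vars))

    def implies(a, b):
        return (not a) or b

    # Phi -> Phi is a tautology; Phi -> Psi in general is not.
    print(is_tautology(lambda p: implies(p, p), 1))     # True
    print(is_tautology(lambda p, q: implies(p, q), 2))  # False
    ```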
