
Electrical Engineering

[size=150][b]Application of Mathematics & Science[/b][/size]

[hr]

The discipline of electrical engineering is an ideology of technology. It is an institution and system of concepts (ideas), an epistemic philosophy as an epistemology of theories and practices. Its logical structure is a [url=https://www.nationstates.net/page=dispatch/id=1346583]paradigm[/url] (the historical [i]episteme[/i] of [url=https://www.nationstates.net/page=dispatch/id=1556955]Michel Foucault[/url] and the technical artifice of [i]techne[/i]), an artificial process of cognition by which conscious subjects interact with (interpret, evaluate, communicate and produce) the objects of natural, universal or physical experience. This discourse is an imagination of existence that relates to the real conditions of existence. Louis Althusser proposes that whilst ideologies possess different forms, their function is similar in history. An ideology constitutes the subject transformed from an individual person. The recognition of the identity of the ego by conscience occurs internal to ideology, a model or structure to which it is impossible to be external in correspondence to an object of reality. Religion and morality are ideologies (as argued by [url=https://www.nationstates.net/page=dispatch/id=1282502#Ethics]Friedrich Nietzsche[/url]), and for many electrical engineers in Atlantis the discipline and its principles are their doctrine and creed. Their society (social organisation) is the Institute of Electrical and Electronics Engineers (IEEE, a technical and professional association with its corporate headquarters in New York), as the guardians of the order of the [url=https://www.nationstates.net/page=dispatch/id=225540#Footnote6]electron[/url] and the devout of their heroes and saints. Electrical engineering is an application of the pure and fundamental mathematics and sciences of mathematicians and scientists, similar to the medics of medicine with the physical (natural and material) and empirical sciences. The following discussion includes some of the principal concepts whose mathematics are proof of its rites and cults.

[anchor=Phil][size=125][b]Philosophy[/b][/size][/anchor]

[url=https://www.nationstates.net/page=dispatch/id=1346583]Philosophy[/url] is a method of conceptual elucidation in science. As with epistemological subjectivity and objectivity, in science a conflict exists between realists and relativists, who respectively argue that the description of the natural world is a true reality or a social construct. Similarly, philosophers debate whether the existence of mathematical entities is absolute (eternal and abstract ideas, and universal and certain objects) or fallible (corrigible and incomplete beliefs, and revisable and uncertain truths). George Boole, Augustus De Morgan, Frege, Russell (in [i]Principia Mathematica[/i] with Alfred North Whitehead), and Wittgenstein incorporated logic as [url=https://www.nationstates.net/page=dispatch/id=1167374]mathematics[/url]. Henri Poincaré dissented from [url=https://www.nationstates.net/page=dispatch/id=1346583#TLP]Frege and Russell[/url] by arguing that mathematics was a discipline of intuition and not logic, because it is synthetic (propositions verifiable by experience and their relation with the world) and not analytic (true by virtue of the significance of the concepts they express). Alan Turing argued with Wittgenstein, who claimed that mathematics did not discover absolute truth but invented it. The two men [url=https://rhizome.org/editorial/2013/mar/19/queer-computing-2/]encountered each other in contact (intercourse)[/url] in seminar lectures on the value of the formalism of mathematics. They missed and passed each other in comprehension and connection in these discussions. Turing, as a founder of computation, would argue that the logic of mathematics is a schematic combination of the faculties of intuition and ingenuity. In these views, mathematics is either discovered or invented. [url=https://www.nationstates.net/page=dispatch/id=1346583#TLP]Wittgenstein[/url] proposed that mathematics consists of "language games", which are practices governed by norms that provide [url=https://www.nationstates.net/page=dispatch/id=1288113#Sem]significance[/url] to the symbolism of concepts and ideas. These norms are of traditional, cultural and social origins, not logical necessity. Inspired by the scepticism of [url=https://www.nationstates.net/page=dispatch/id=1346583]Hume[/url], this fallibilism (common to [url=https://www.nationstates.net/page=dispatch/id=1346583#Sci]Popper[/url]) argued that no mathematical definitions or proofs are final; instead they are only accepted on the basis of authority and not by the conclusive justification of logic or reason. 

In Atlantean myth, a divine [url=https://www.nationstates.net/page=dispatch/id=1113411#Etym]cony[/url] with a mortar and pestle is a recognisable (perceivable and conceivable) illusion (i.e. erroneous and incorrect) of a familiar object, profile, figure, image or form as pareidolia of the lunar [i]maria[/i] (plural of [i]mare[/i]). This psychic phenomenon of a [url=https://www.nationstates.net/page=dispatch/id=1346583#Virtual]simulacrum[/url] (representation and formation in imagination) from a vague, aleatory, indistinct and indeterminate stimulus is analogous to the mythic and iconic constellation. The cony is referred to as a [i]fenek[/i] in [url=https://www.nationstates.net/page=dispatch/id=1113411#Hist]Maltese[/url] from the Arabic فَنَك or [i]fanak[/i] for a vulpine fox (ἀλώπηξ or [i]alṓpēx[/i], the origin of the deficiency, insufficiency or scarcity of "alopecia") of the [url=https://www.nationstates.net/page=dispatch/id=1458525]Sahara[/url]. A terrier hound chases these animals that burrow (bury and covey) in terrestrial and buccal cubicles, caves, cavities, holes, hollows or bouns (e.g., a [i]clap(eri)us[/i] in a warren). The name [i]dassie[/i] refers to the hyrax as a Dutch diminutive of [i]das[/i] (cf. German [i]Dachs[/i]) for badger ([i]brockos[/i] or "brock" from the Celtic, a cognate of [i]broccus[/i] that was combined with [i]truncus[/i] or "trunk") that is known as [i]tasugo[/i] in Spanish, [i]teixugo[/i] in Galician, and [i]texugo[/i] in Portuguese from the Germanic Gothic, and related to the Latin (via Celtic [i]tasgos[/i]) as in the Italian [i]tasso[/i], Spanish [i]tejo(-ón)[/i], Catalan [i]teixó[/i], and Galician-Portuguese [i]teixo[/i], which is not to be confused with the Scythian-origin [i]taxus[/i] for "yew". In French this mammal is known as [i]blaireau[/i] from the Celtic Gaulish or Germanic Frankish [i]blar[/i]. The others are related to the Latin [i]tela[/i] for "text, textile, tissue, fabric, membrane, web" as in "to weave", or [i]tessere[/i] in Italian, [i]tejer[/i] in Spanish, [i]teixir[/i] in Catalan, [i]tecer[/i] in Portuguese and Galician, and [i]tisser[/i] in French. It is related to technical and architectural production and natural and artificial generation with the Greek τέχνη or [i]tékhnē[/i] for the construction of structure and artifice. Erasmus (cognate with the Greek [i]éramai[/i] and Sanskrit [i]rámate[/i] for "I love") in the [i]Adagia[/i] (a record of the humanist [i]sententiae[/i] and adages or expressions of abstraction) wrote that "a fox knows many things, but a hedgehog one important thing" ([i]multa novit vulpes, verum echinus unum magnum[/i]). This influenced philosophical classification. Urchin hedgehogs that view the world with the lens of a sole idea or concept include [url=https://www.nationstates.net/page=dispatch/id=1167374]Plato[/url], [url=https://www.nationstates.net/page=dispatch/id=1094409#Myth]Dante[/url], [url=https://www.nationstates.net/page=dispatch/id=1167374]Pascal[/url], [url=https://www.nationstates.net/page=dispatch/id=1106976#Context]Nietzsche[/url], and [url=https://www.nationstates.net/page=dispatch/id=1197243]Proust[/url]. Foxes that view the world with multiple experiences or convictions include [url=https://www.nationstates.net/page=dispatch/id=1282502#Human]Aristotle[/url], Erasmus, [url=https://www.nationstates.net/page=dispatch/id=946041]Shakespeare[/url], and [url=https://www.nationstates.net/page=dispatch/id=1197243]Goethe[/url]. 
Wittgenstein transformed himself from a hedgehog by nature to a fox by intellectual imagination in his philosophic transition. This humorous system of classification is similar to that proposed by [url=https://www.ams.org/notices/200902/rtx090200212p.pdf]Freeman Dyson[/url], which distinguishes an avian bird (fowl) from an amphibian frog (toad). A bird views the world as a physical unification of cognitive concepts with the mathematics (an art and a science) of natural philosophy. A frog views the world in the observation and experimentation of facts, details and particulars. These equally important perspectives influence the formation of scientific theories. Birds are often mystic, such as Aristotle, Plato, [url=https://www.nationstates.net/page=dispatch/id=1378729]Newton[/url], [url=https://www.nationstates.net/page=dispatch/id=1231481#Footnote6]Kepler[/url] and [url=https://www.nationstates.net/page=dispatch/id=1282502#ET]Einstein[/url]. 

Francis Bacon, who was a figurative frog, first proposed the induction of the scientific method for the investigation of the physical (natural) [url=https://iep.utm.edu/lawofnat/]laws[/url] of the world (Nature and Cosmos of factual and not logical verity, universal or statistical expressions, and conditional not categorical conceptions), in contrast to Descartes (a bird) with his deduction. The incomplete [url=https://www.nationstates.net/page=dispatch/id=251964#Cities]utopic[/url] novel [i]New Atlantis[/i] by Bacon was published posthumously in 11626 HE. It depicts and envisions a society whose foundation is a scientific institution that conducts experiments using the organon (a process, system or method) he proposed in [url=https://plato.stanford.edu/entries/francis-bacon/][i]Novum Organum[/i][/url]. The cover illustrated a galleon (a naval galley symbolic of empirical investigation and observation in natural philosophy of fact as the mental activity of experience with reason) passing the mythical [url=https://www.nationstates.net/page=dispatch/id=1094409#Myth]Columns of Hercules (Heracles)[/url] at the [url=https://www.nationstates.net/page=dispatch/id=1458525]Strait of Gibraltar[/url], the ostium of the Mediterranean Sea to the Atlantic Ocean. He proposed a "new organ" of logic and syllogism (conclusions from propositions of notions, presumptions and premises). Bacon divided physical science within natural philosophy into physics (particular and variable causes) and metaphysics (general and constant causes). His method reduces the realm of apparitions to a reality accessible for manipulation. In this reduction of [i]a posteriori[/i] induction, general axioms or universal principles are informed by the special particulars of the interpretations, impressions and observations of the senses. This contrasts with the [i]a priori[/i] deduction of Aristotle, which Bacon criticises as an impediment to natural philosophy. Descartes, a contemporary of Bacon, advanced a rational, theoretical and deductive descent that diverged from this empirical, practical and inductive ascent of Bacon. For Descartes, the objective was absolute verity, whilst for Bacon it was the relative order of natural phenomena (causes). Bacon rejected the inferences of essential [i]anticipatio naturae[/i] ("anticipation of nature", with its conservative convention, conjecture, computation, prediction, projection, prevision and speculation) in favour of existential [i]interpretatio naturae[/i] ("interpretation of nature") from a progressive collection of observable facts and methodical investigation of the complexity of Nature. Bacon argued that forms and [url=https://www.nationstates.net/page=dispatch/id=1378729#Norm]causes[/url] (material or substantial, formal or ideal, kinetic or efficient, and functional or final) are the universal physics of actual effects. Bacon rejected the final cause in the natural (not the artificial) realm for its superstitious conflation of theology and teleology in cosmology. The obstacles of critical examination are the idols of the tribe ([i]idola tribus[/i]), the idols of the [url=https://www.nationstates.net/page=dispatch/id=1282502#Real]cave[/url] ([i]idola specus[/i]), the idols of the market ([i]idola fori[/i]) and the idols of the theatre ([i]idola theatri[/i]). False idols are intellectual obfuscations and fallacies that originate from the cognitive misalignment of the conceptual reflections of imagination and its predispositions, suppositions and prejudiced generalities. 
Bacon argued humanity is a servant and interpreter of Nature and its phenomena and qualia.

[anchor=Found][size=125][b]Foundations[/b][/size][/anchor]

The German mathematician Carl Friedrich Gauss (Gauß) believed comprehension of [url=https://www.nationstates.net/page=dispatch/id=1167374]Euler's identity[/url] to be a benchmark of mathematical importance; he considered the identity of Euler to be the [i]pons asinorum[/i] ("bridge of asses") of mathematics. This name refers to a proposition of geometry by the Greek mathematician Euclid of the city of Alexandria in Ptolemaic Egypt: it states that the angles opposite the equal sides of an isosceles triangle are equal. As a metaphor, the name signifies a critical problem or test that functions to distinguish or separate a person by their intelligence. In the mathematical tract [i]Elements[/i] (Στοιχεῖον or [i]Stoikheîon[/i]), the name [i]Dulcarnon[/i] (from the Arabic ذُو ٱلْقَرْنَيْن‎ or [i]Ḏū al-Qarnayn[/i] for "he of the two horns", as in [url=https://www.nationstates.net/page=dispatch/id=1152733]Alexander the Great and Cyrus the Great[/url]) refers to the [url=https://www.nationstates.net/page=dispatch/id=1167374]Pythagorean Theorem[/url]. The three dimensions of space (with breadth, height and profundity) are defined by three axes with either (1) the Cartesian coordinates (named after René Descartes, with the Latin family name [i]Cartesius[/i], whose personal name originates from the Latin [i]Renatus[/i] as in "revive, resuscitate, reanimate, renovate, reincarnate, regenerate" with the cognate [i]Renato[/i], and who first published the system that is fundamental to calculus in mathematics) of longitude (abscissa or horizontal distance) [i]x[/i], latitude (ordinate or lateral distance) [i]y[/i], and altitude (applicate or vertical distance) [i]z[/i]; (2) the cylindrical coordinates of polar radius (radial distance) [i]ρ[/i] or [i]r[/i], azimuth (polar angle or angular position) [i]φ[/i] or [i]θ[/i], and altitude (axial position or normal distance to the polar plane) [i]z[/i]; or (3) the spherical coordinates of polar radius (radial distance) [i]ρ[/i] or [i]r[/i], zenith (polar angle, [url=https://www.nationstates.net/page=dispatch/id=1231481]inclination[/url] or colatitude, which is 90 degrees or ° and [i]π[/i]/2 radians minus the elevation or latitude with respect to the normal axial direction) [i]θ[/i], and azimuth (longitude) [i]φ[/i]. Each of the three coordinate systems is related to the others by trigonometric functions of geometry.
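
These trigonometric relations can be made concrete in a brief sketch (a minimal illustration in Python; the function names and the convention of measuring the zenith from the positive [i]z[/i]-axis are assumptions of this example, not of the text):

[code]import math

def cartesian_to_cylindrical(x, y, z):
    """Convert Cartesian (x, y, z) to cylindrical (rho, phi, z)."""
    rho = math.hypot(x, y)   # radial distance in the polar plane
    phi = math.atan2(y, x)   # azimuth in radians
    return rho, phi, z

def cartesian_to_spherical(x, y, z):
    """Convert Cartesian (x, y, z) to spherical (r, theta, phi)."""
    r = math.sqrt(x * x + y * y + z * z)    # radial distance
    theta = math.acos(z / r) if r else 0.0  # zenith (polar angle)
    phi = math.atan2(y, x)                  # azimuth (longitude)
    return r, theta, phi

print(cartesian_to_spherical(1.0, 1.0, 1.0))  # (√3, ≈0.955 rad, π/4)[/code]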

In "logical perfection", Gauss was known for his inclusion of synthesis, and omission of analysis. He, prior to the modern invention of the "fast" ([i]divide et impera[/i], or "division and conquest") signal processing algorithm for the discrete [url=https://www.nationstates.net/page=dispatch/id=946041#Local]Fourier transform[/url] (a transformation by Joseph Fourier that decomposes a temporal or spatial function and signal into its constituent domain of frequencies), proposed trigonometric interpolation as a method. As a complex function of a real variable that transforms real variables, the transformation is similar to the complex function of a complex variable that transforms real variables named for Pierre-Simon Laplace (notable for his advance of celestial mechanics). He studied in astronomy the gravitational and orbital mechanics of the solar system. In proof, his treatment of the optimisation and approximation method of the minimum quadrates ([url=https://www.nationstates.net/page=dispatch/id=1378729#Neuro]minimisation[/url] of error by the sum of the squares of the residual differences of the estivated values and observed data) for a system with more equations than variables ([i]quantitas incognita[/i]) that determine it. This he proved with his normal distribution of the probability of a continuous (aleatory or stochastic) variable with a real value and an expectation (mean or [i]media[/i], median, mode, variance and typical deviation) in statistics. Other distributions include that of Siméon-Denis Poisson, the binomial of Jacob Bernoulli, and that of John William Strutt (the Baron of Rayleigh).

Laplace introduced a theorem, first proven by Thomas Bayes, that relates the conditional probabilities of events. Applied to statistical [url=https://www.nationstates.net/page=dispatch/id=1378729#Neuro]inference[/url], [url=https://maxbarry.com/2021/04/07/news.html]Bayesian inference[/url] relates the posterior probability of a hypothesis [i]H[/i] conditional (| or contingent) on the observation of event data [i]D[/i] as evidence to the product (·) of the prior probability of the hypothesis and the probability (P for a probability density function for continuous variables or a probability mass function for discrete variables) of the event as a function of the evidence conditional on the hypothesis, normalised by the probability of the marginal model evidence. This can be written as:

[list]P([i]H[/i] | [i]D[/i]) = P([i]D[/i] | [i]H[/i]) · P([i]H[/i]) / P([i]D[/i])[/list]

where the contingencies, with ¬ for "not" or the negation, ⋃ (∨) for "or" or the union (disjunction) and ⋂ (∧) for "and" or the intersection (conjunction), are:

[list][*]P([i]D[/i]) = P([i]D[/i] | [i]H[/i]) · P([i]H[/i]) +  P([i]D[/i] | ¬[i]H[/i]) · P(¬[i]H[/i]) = P(([i]D[/i] ⋂ [i]H[/i]) ⋃ ([i]D[/i] ⋂ ¬[i]H[/i])) = P([i]H[/i] | [i]D[/i]) · P([i]D[/i]) + P(¬[i]H[/i] | [i]D[/i]) · P([i]D[/i]) =  P(([i]H[/i] ⋂ [i]D[/i]) ⋃ (¬[i]H[/i] ⋂ [i]D[/i]));

[*]P(¬[i]D[/i]) = 1 − P([i]D[/i]) = P(¬[i]D[/i] | [i]H[/i]) · P([i]H[/i]) + P(¬[i]D[/i] | ¬[i]H[/i]) · P(¬[i]H[/i]) = P((¬[i]D[/i] ⋂ [i]H[/i]) ⋃ (¬[i]D[/i] ⋂ ¬[i]H[/i])) = P([i]H[/i] | ¬[i]D[/i]) · P(¬[i]D[/i]) + P(¬[i]H[/i] | ¬[i]D[/i]) · P(¬[i]D[/i]) =  P(([i]H[/i] ⋂ ¬[i]D[/i]) ⋃ (¬[i]H[/i] ⋂ ¬[i]D[/i]));

[*]P([i]H[/i]) = P([i]D[/i] | [i]H[/i]) · P([i]H[/i]) +  P(¬[i]D[/i] | [i]H[/i]) · P([i]H[/i]) =  P(([i]D[/i] ⋂ [i]H[/i]) ⋃ (¬[i]D[/i] ⋂ [i]H[/i])) = P([i]H[/i] | [i]D[/i]) · P([i]D[/i]) + P([i]H[/i] | ¬[i]D[/i]) · P(¬[i]D[/i]) = P(([i]H[/i] ⋂ [i]D[/i]) ⋃ ([i]H[/i] ⋂ ¬[i]D[/i]));

[*]P(¬[i]H[/i]) = 1 − P([i]H[/i]) = P([i]D[/i] | ¬[i]H[/i]) · P(¬[i]H[/i]) +  P(¬[i]D[/i] | ¬[i]H[/i]) · P(¬[i]H[/i]) =  P(([i]D[/i] ⋂ ¬[i]H[/i]) ⋃ (¬[i]D[/i] ⋂ ¬[i]H[/i])) = P(¬[i]H[/i] | [i]D[/i]) · P([i]D[/i]) + P(¬[i]H[/i] | ¬[i]D[/i]) · P(¬[i]D[/i]) = P((¬[i]H[/i] ⋂ [i]D[/i]) ⋃ (¬[i]H[/i] ⋂ ¬[i]D[/i])).[/list]

The theorem results in P([i]H[/i] ⋂ [i]D[/i]) = P([i]H[/i] | [i]D[/i]) · P([i]D[/i]) = P([i]D[/i] ⋂ [i]H[/i]) = P([i]D[/i] | [i]H[/i]) · P([i]H[/i]) for the joint (conjoined or bivariate) probability of dependent events (for independent events this respectively equals P([i]H[/i]) · P([i]D[/i]) and P([i]D[/i]) · P([i]H[/i])). The predictive prior and posterior distributions are the result of the marginalisation (the collection of the subensemble of probabilities of the aleatory variables without reference to the other values) of the probabilistic distribution of a possible value of an event for its observations conditional on its prior and posterior distributions (for the parameter and hyperparameter prior and posterior to the observation of an event). The theorem is extendable as a general formulation to multiple events in a sequence of independent and identically distributed (iid) observations ([b]E[/b]∈[i]E[/i][sub][i]n[/i][/sub]) where a model is represented by an event ([b]M[/b]∈[i]M[/i][sub][i]m[/i][/sub]). Thus, the posterior probability P([b]M[/b] | [b]E[/b]) is the quotient of the product of P([b]E[/b] | [b]M[/b]) and P([b]M[/b]) (the prior probability, i.e. the consequence of antecedents) with the divisor as the summation of the products P([b]E[/b] | [i]M[/i][sub][i]m[/i][/sub]) and P([i]M[/i][sub][i]m[/i][/sub]) over the [i]m[/i] models. The likelihood (verisimilitude) function P([b]E[/b] | [b]M[/b]) is the product (Π) of the sequence (factors) P([i]E[/i][sub][i]i[/i][/sub] | [b]M[/b]) for the index of multiplication [i]i[/i] as an element of the [i]n[/i] observed events.
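
A numeric sketch of the theorem (the prior and the two conditional probabilities are invented values for illustration):

[code]def bayes_posterior(p_h, p_d_given_h, p_d_given_not_h):
    """Posterior P(H|D) from the prior P(H) and the two likelihoods."""
    # Total probability: P(D) = P(D|H)P(H) + P(D|not H)P(not H).
    p_d = p_d_given_h * p_h + p_d_given_not_h * (1.0 - p_h)
    return p_d_given_h * p_h / p_d

# A test with 99% sensitivity and a 5% false-positive rate,
# applied to a hypothesis with a 1% prior probability:
print(bayes_posterior(0.01, 0.99, 0.05))  # ≈ 0.167[/code]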

The [i]magnum opus[/i] of Gauss, [i]Disquisitiones Arithmeticae[/i], investigated arithmetic and numbers. He said that "mathematics is the queen [[i]Königin[/i], the feminine form of the masculine [i]König[/i] or "king"] of the sciences—and arithmetic is the queen of mathematics". In the text he introduces congruences and illustrates the Chinese remainder theorem (first described as a problem by the master 孙子 or [i]s(y)un / sung / sen(g) zi / chi[/i], with an algorithm for its resolution described by [url=https://www.nationstates.net/page=dispatch/id=1231481#Footnote3]Aryabhata[/url], with its special cases discussed by [url=https://www.nationstates.net/page=dispatch/id=1231481#CCS]Brahmagupta[/url] (notable for zero) and in the [i]Liber Abaci[/i] of Fibonacci, and with a general solution proven in a constructive demonstration by 秦九韶 or [i]qin / chin / cing jiu / giu / gau / kiu / kau shao / si(a)u / zau / s(i)eu[/i]). The modulus is the base in modular arithmetic for the congruence relation (an [url=https://www.nationstates.net/page=dispatch/id=1346583#TLP]equivalence[/url] relation) of algebraic structures (e.g., the group of integers: zero, the positive natural numbers and their negative opposites). For a modulus (an integer [i]n[/i] > 1), two integers [i]a[/i] and [i]b[/i] are congruent modulo [i]n[/i] if [i]n[/i] is a divisor of their difference (i.e., if there is an integer [i]k[/i] such that [i]a[/i] − [i]b[/i] = [i]kn[/i]). This is notated as [i]a[/i] ≡ [i]b[/i] (mod [i]n[/i]). The absence of the parentheses indicates the binary modulo operation, or the remainder of Euclidean division of [i]b[/i] by [i]n[/i], where [i]b[/i] is the dividend and [i]n[/i] is the divisor. The congruence modulo [i]n[/i] asserts that [i]a[/i] and [i]b[/i] have an equal remainder when divided by [i]n[/i]. That is,

[list][i]a[/i] = [i]pn[/i] + [i]r[/i][/list]
[list][i]b[/i] = [i]qn[/i] + [i]r[/i][/list]

where 0 ≤ [i]r[/i] < [i]n[/i] is the common remainder. The congruence relation is therefore [i]a[/i] = [i]kn[/i] + [i]b[/i] when [i]k[/i] = [i]p[/i] − [i]q[/i]. For divisors or moduli [i]m[/i][sub]1[/sub], …, [i]m[/i][sub]n[/sub] as integers > 1, the [url=https://crypto.stanford.edu/pbc/notes/numbertheory/crt.html]remainder theorem[/url] states that if these are pairwise coprime (prime with each other, where the greatest common divisor of [i]m[/i][sub]i[/sub] and [i]m[/i][sub]j[/sub], when [i]i[/i] ≠ [i]j[/i], is 1; the [url=https://crypto.stanford.edu/pbc/notes/numbertheory/euclid.html]Euclidean algorithm[/url] efficiently computes this compared to factorisation), then in a system of equations or congruences with integers [i]a[/i][sub]1[/sub], …, [i]a[/i][sub]n[/sub], or 

[list][i]x[/i] ≡ [i]a[/i][sub]1[/sub] (mod [i]m[/i][sub]1[/sub])[/list]
[list]⋮[/list]
[list][i]x[/i] ≡ [i]a[/i][sub][i]n[/i][/sub] (mod [i]m[/i][sub][i]n[/i][/sub])[/list]

there is a unique solution [i]x[/i] modulo [i]M[/i], where [i]M[/i] = [i]m[/i][sub]1[/sub] ⋯ [i]m[/i][sub]n[/sub]. 
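
A constructive sketch of the theorem (a minimal Python implementation; the incremental construction with the extended Euclidean algorithm is one standard demonstration, assumed here, and the residues are the classic example of Sunzi):

[code]def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) = s*a + t*b."""
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

def crt(residues, moduli):
    """Solve x ≡ a_i (mod m_i) for pairwise coprime moduli m_i."""
    x, M = 0, 1
    for a, m in zip(residues, moduli):
        g, s, _ = extended_gcd(M, m)
        assert g == 1, "moduli must be pairwise coprime"
        x += M * (s * (a - x) % m)  # adjust x so that x ≡ a (mod m)
        M *= m
    return x % M

print(crt([2, 3, 2], [3, 5, 7]))  # 23, the unique solution modulo 105[/code]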

In addition to statistics and numbers, Gauss was interested in differential and integral geometry, with its theory of plane and space curves. The curvature of surfaces and varieties is measurable by the angles, distances and rhythms that determine them. It was influenced by infinitesimal calculus, the mathematical study of the differential gradient fluxion and integral fluent function of a value or quantity that varies in dependence on variables. The priority strife between Isaac Newton and Gottfried Wilhelm Leibniz over the invention (conception and publication) of calculus has been concluded: their ideas were independent of each other. The two mathematicians invented different notations, with two additional notations created by Euler and Joseph-Louis Lagrange. Additionally, Gauss discovered geometries that were not [url=https://www.nationstates.net/page=dispatch/id=1163728]Euclidean[/url]; the intersection of metric geometry and affine geometry extends to include [url=https://www.nationstates.net/page=dispatch/id=1378729#Maths]hyperbolic and elliptic[/url] geometries. This would permit Einstein's [url=https://www.nationstates.net/page=dispatch/id=1282502#ET]general theory of relativity[/url], which united in description gravity (gravitation) as a property of four dimensions (space and time). It related the curvature of spacetime to the energy and momentum of the matter and radiation present. Maxwell's equations are compatible with Einstein's special and general theories of relativity. Einstein, who did not consider himself a mathematician, respected mathematics for its power and beauty.

The human mind processes the displacement (motion, which manifests as change in the directions or dimensions of space with respect to time), affine transformations (translation, reflection, dilatation, contraction, rotation, and transvection), and perspective (projection) observed in the visual field (sensory vision and object recognition by the collection and transduction of a [url=https://www.nationstates.net/page=dispatch/id=1288113#Sem]signal[/url]). In descriptive graphical representation, the rectilinear rays of projection of an object in three-dimensional space are parallel, and intersect orthogonally or obliquely with the two-dimensional picture or plane of image. In perspective, parallel lines appear to converge at a point of fugue or flight, the vanishing point (if the parallel lines are orthogonal to the plane of image, the point corresponds to the oculus, the location or station of the ocular observer). The intersection (i.e., not a void ensemble) of geometric objects occurs at a point (of two lines, or a line and a plane) or a common ensemble of points (a line where two planes, or a line and a plane, intersect) in space. Algebra, with its algorithmic foundations and regulations, extended arithmetic (and its binary operations, varying in the properties of association, commutation, and distribution) with the implementation of abstract structures (e.g., variables, functions, matrices, and vectors). These vectors, or geometric quantities with magnitude (module or absolute value norm as a scalar with a unit) and direction (orientation and sense in reference to the referential basis and order), have a course in space and momentum in motion. The position of these vectors is defined by the coordinate system. They can be normalised to unit vectors; every vector in the space can be written as a linear combination (with the coordinates as coefficients) of such unit vectors, if the basis is formed by a linearly independent system of these unit vectors as elements that generate ("span") the vector space (whose dimension is the cardinality of the basis). In the canonical basis, the unit vectors are mutually orthogonal (perpendicular, or normal to the tangent plane of a surface). A vector is an eigenvector ("own, proper, self") of a linear transformation (operator or application) if its image under the transformation is a scalar (called an eigenvalue) multiple of that vector. In a finite-dimensional vector space, the linear transformation, which does not mutate the orientation of an eigenvector, can be expressed as a matrix. With differentiation (continuous and instantaneous variation) and integration (summation of definite and infinite series of quantities), infinitesimals (functional limits) would progress this further (e.g., convolution and correlation).
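
The defining relation of an eigenvector can be verified in a short sketch (the matrix below is an invented example):

[code]import numpy as np

# A linear transformation expressed as a (symmetric) matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
v = eigenvectors[:, 0]   # an eigenvector, as a column
lam = eigenvalues[0]     # its associated eigenvalue

# The transformation only scales the eigenvector: A·v = λ·v.
print(np.allclose(A @ v, lam * v))  # True[/code]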

The extension of the differentiation and integration calculus of one variable to functions with multiple (independent) variables permits the study of the dynamics of systems with multiple [url=https://www.nationstates.net/page=dispatch/id=1378729#Info]degrees of freedom[/url]. The domain of one-dimensional curves (with length) and two-dimensional surfaces (with area) corresponds to [i]n[/i]-dimensional Euclidean space (real coordinate space of dimension [i]n[/i] as the codomain). A scalar field of [i]n[/i] dimensions corresponds to one-dimensional space of numbers, values or quantities. The application of Lagrange multipliers to locate maxima and minima (plurals of maximum and minimum, analogous to extrema and extremum) of a function is a method of optimisation that subjects the function to equality constraints (conditions). The formulation of the gradient of the objective function [i]f(x, y)[/i] and the gradients of the equality constraint function [i]g(x, y) = 0[/i] results in the Lagrangian function of stationary points 

[list][i]L(x, y, λ) = f(x, y) − λg(x, y)[/i][/list]

for the variables [i]x[/i] and [i]y[/i] of [i]n[/i]-dimensions and the Lagrange multiplier [i]λ[/i]. The calculation of the gradient of [i]L(x, y, λ)[/i] and its optimisation (where it equals zero, without explicit parameterisation in terms of the constraints) results in critical points at the optimum (local and global optima) and saddle points. The gradient (∇ of a function as its partial derivatives with respect to its variables or directions, as symbolised with the ∂ that is analogous to the d of the total derivative), divergence and rotation operations in vector calculus are applied to vector fields with a domain of [i]n[/i]-dimensions and a codomain of [i]m[/i]-dimensions. Suppose a vector-valued function [i][b]f[/b][/i], such that each of its first-order partial derivatives exists in the [i]n[/i]-dimensional space, accepts an argument [i][b]x[/b][/i] that is an element (member) of that space to produce [i][b]f[/b]([b]x[/b])[/i] in [i]m[/i]-dimensional space. The matrix [b][i]J[/i][/b] (the Jacobian, named after Carl Gustav Jacob Jacobi) of [i][b]f[/b][/i] is defined to be an [i]m[/i]×[i]n[/i] matrix (the number of elements in a column by the number of elements in a row) whose [i](i, j)[/i][url=https://www.nationstates.net/page=dispatch/id=1113411#NE]th[/url] entry is the partial derivative of the [i]i[/i]th component of [i]f[/i] (indexing the row) with respect to the [i]j[/i]th variable [i]x[/i] (indexing the column). The matrix, a row of column vectors, represents the differential of [b][i]f[/i][/b] at every point [i][b]x[/b][/i] where [i][b]f[/b][/i] is differentiable. For a column matrix of a displacement vector ([i][b]y[/b][/i] − [i][b]x[/b][/i]), the optimal linear approximation of [i][b]f[/b]([b]y[/b])[/i] is

[list][i][b]f[/b]([b]x[/b]) + [b]J[/b]([b]x[/b]) ⋅ ([b]y[/b] – [b]x[/b])[/i] [/list]

(the product of the matrix multiplication of an [i]m[/i]×[i]n[/i] matrix and an [i]n[/i]×1 matrix is an [i]m[/i]×1 matrix). If [i]m = n[/i], then the Jacobian matrix is a square matrix, so its determinant is defined. The determinant encodes properties of the linear transformation described by the matrix, i.e. the [i]n[/i]-dimensional volume scale factor. In a transformation with a continuous bijective correspondence, the determinant is the factor applied to the differential for the change of variables of coordinate systems in an integral. The polymath John von Neumann ([i]né[/i] János) contributed to linear programming (optimisation of a linear objective function, subject to linear equality and linear inequality restrictions), [url=https://www.nationstates.net/page=dispatch/id=1362993#GT]game theory[/url], cellular automata, computer architecture and quantum mechanics. 
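
The role of the Jacobian as the optimal linear approximation can be sketched numerically (the sample map [i]f[/i], its analytic Jacobian and the point of evaluation are inventions of this example):

[code]import numpy as np

def f(v):
    """A sample map from R^2 to R^2: f(x, y) = (x*y, x + sin(y))."""
    x, y = v
    return np.array([x * y, x + np.sin(y)])

def jacobian(v):
    """Analytic Jacobian of f at v = (x, y)."""
    x, y = v
    return np.array([[y, x],
                     [1.0, np.cos(y)]])

x0 = np.array([1.0, 2.0])
dx = np.array([0.01, -0.02])     # a small displacement vector
# First-order approximation: f(x0 + dx) ≈ f(x0) + J(x0) · dx.
approx = f(x0) + jacobian(x0) @ dx
print(approx, f(x0 + dx))        # the two agree to first order[/code]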

[anchor=EM][size=100][b]Electromagnetism[/b][/size][/anchor]

In electromagnetism (electrodynamics), Gauss discovered two of the four partial differential equations (with integral forms in vector calculus by Oliver Heaviside, whose step function is related to the unit impulse of Paul Dirac that is important in signal processing) published by [url=https://www.nationstates.net/page=dispatch/id=1378729]James Clerk Maxwell[/url]. The first theorem describes the static electric field and the electric charges that cause it, whereby a static electric field is directed from positive charges to negative charges. The net flux (divergence) of the electric field through any closed surface (opposite of one that is open, overt or apert) is proportional to the charge enclosed by the surface, irrespective of its distribution. This can be derived from the inverse square law of Charles-Augustin de Coulomb that quantifies the magnitude of the electrostatic force between two electric charges. The force, by a constant (the reciprocal of the product of 4π and the absolute dielectric permittivity of the vacuum of free space, where that permittivity is the reciprocal of the product of the magnetic permeability and the quadrate of the celerity of light; this is related to the polarisability or susceptibility of responsive polarisation by the relative permittivity), is directly proportional to the product of the magnitudes of the charges and inversely (reciprocally) proportional to the quadrate of the (radial) distance that separates their centres. Compare this to the geometric dilution, diffusion, propagation, emission and radiation from a central point as a sphere in three-dimensional space, where the intensity or density of lines of (luminous or radiant) flux in the (emitted, transmitted, received, reflected, excited or incident) divergence of a vector or energy field is illuminance or irradiance (which are related to luminance or radiance). This attraction and repulsion of charged particles is analogous to Newton's law of universal gravitation for point particles of mass. The second states that the magnetic field in materials is caused by dipole configuration (e.g., a circuit ring with current). That is, its divergence is zero, and the total flux through any closed surface is equal to zero. The third describes induction (i.e., a magnetic field that varies in time induces an electric field that varies in space, and vice-versa), discovered by Michael Faraday. 
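
Coulomb's inverse square law admits a one-function sketch (the charges and separation are illustrative values):

[code]import math

EPSILON_0 = 8.8541878128e-12  # vacuum permittivity, in F/m

def coulomb_force(q1, q2, r):
    """Magnitude of the electrostatic force (N) between two point
    charges (C) separated by r (m); positive means repulsive."""
    return q1 * q2 / (4.0 * math.pi * EPSILON_0 * r * r)

# Two electrons one nanometre apart repel with ~2.3e-10 N:
e = -1.602176634e-19
print(coulomb_force(e, e, 1e-9))[/code]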

The synchronous and induction (or asynchronous) motors of Nikola Tesla contributed to the modern polyphase system of electrical energy with its alternating current. Inductance was independently discovered by Joseph Henry. His work would be practical for [url=https://www.nationstates.net/page=dispatch/id=946041#Local]telegraphy[/url]. The [url=https://www.nationstates.net/page=dispatch/id=1378729#Indust]work[/url] per unit charge necessary for the motion of a charge around a closed loop equals the rate of change of the magnetic flux contained by the surface. Its notation may use a rotational vector operation to describe the infinitesimal circulation of a field. Heaviside used the term reluctance to describe the magnetic resistance, or the magnetomotive force (equivalent to the product of the number of complete revolutions, rotations, circles, throws or wends, which is the index of envelopment of a curve with chiral orientation, and the current in the spiral) divided by the magnetic flux (its inverse is permeance, as the permeation of magnetic flux, which is the analogue of electrical conductance or the inverse of electrical resistance in electric circuits). Permeability of magnetic circuits is thus analogous to electrical conductivity, magnetomotive force to electromotive force, magnetic field to electric field, magnetic flux density to current density, and magnetic flux to electric current. A magnetic field applied perpendicular to a conductor results in a transverse magnetic force upon the charge carriers (for electric current, or the motion of electrons with velocity) that additionally results in an opposite electric force from the distribution of charge and a consequential potential difference as the effect.

The fourth (an addition to that of André-Marie Ampère) states that a magnetic field can be generated (induced around a closed loop) in proportion by an electric current and a changing electric field (displacement current). In the derivation of the electromagnetic wave equation, electromagnetic radiation and optic illumination were unified. As corollaries to Maxwell's equations, the circuit laws of Gustav Kirchhoff (based on the work of Alessandro Volta and Georg Ohm) for lumped (concentrated) components, parameters or elements described current and potential difference. By the conservation of charge, the sum of the flow currents (a signed positive or negative quantity that reflects direction) at a connection (node, junction or point) is zero. At low frequencies, the sum of the potential differences around any closed loop in a state space (reduced to a finite dimension, such that the partial differential equations of the continuous, infinite-dimensional time and space model of the physical system are ordinary differential equations) is zero. Heinrich Hertz first proved the existence and propagation of the electromagnetic waves that Maxwell predicted in his equations of electromagnetism. In [url=https://www.nationstates.net/page=dispatch/id=395960#AMU]SI[/url], the unit of electric current A (Ampère) is defined by the elementary charge (of a positive proton, or the negative of an electron) of 1.602176634×10[sup]−19[/sup] C (Coulomb) per second. The unit of electric potential difference V (Volt) is defined as one J (Joule) of electric potential energy per C of electric charge, where a J is the thermal energy dissipated when one A of current passes through a resistance of one Ω (Ohm, equivalent to the quotient of V and A because potential difference is directly proportional to the product of current and resistance, or the inverse of one S or Siemens of conductance) for one second. The unit of electrical power W (Watt) is equal to the product of V and A. The unit of capacitance F (farad, named for Faraday) is equal to the quotient of C and V. The unit of inductance H (Henry) is the quotient of Wb (Weber) and A, where Wb is the unit of magnetic flux of the product of V and s, or the product of the flux density T (Tesla) and m[sup]2[/sup] (area).
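
These unit relations can be checked in a small sketch (the helper function is an invention of this example):

[code]def power_dissipated_w(current_a, resistance_ohm):
    """Watts dissipated by a resistance carrying a current."""
    voltage_v = current_a * resistance_ohm  # Ohm's law: V = I*R
    return voltage_v * current_a            # power: P = V*I

# One ampere through one ohm dissipates one watt; over one
# second that is one joule of thermal energy, as defined above.
print(power_dissipated_w(1.0, 1.0) * 1.0)  # 1.0 J[/code]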

[anchor=Op][size=100][b]Optics[/b][/size][/anchor]

In physics, the quantisation of light by Max Planck led Albert Einstein to interpret the quanta as photons (particles of an electromagnetic field). He proposed that the energy of a photon is proportional to its frequency, which indicated a wave–particle duality. Energy and momentum are analogously related as temporal frequency and spatial frequency are in special relativity (its spacetime was developed by Hermann Minkowski and its transformations were derived by Hendrik Lorentz, who arrived at the electromagnetic force implied by Maxwell's equations). Louis de Broglie postulated that material particles with mass (e.g., an electron) possess wave properties. The quantised orbits correspond to discrete energy levels of the atomic model of Niels Bohr, which improved upon that of Ernest Rutherford. The resemblance of mechanics and optics became stronger. The principle of Pierre de Fermat connects geometric (ray) optics with physical (wave) optics as an analogy of the principle of minimal action. The propagation consequences of this principle are that the proportion of the sines of the angles of incidence and refraction is equivalent to that of the velocities of phase (or wavelengths) in two isotropic media, and that the angle of incidence equals the angle of reflection at the interface or surface. 

Lenses are typically spherical, such that the curvatures of the two optical surfaces are spheres (convex, concave or planar) with a central axis. A biconcave or plano-concave lens diverges collimated light (i.e., is negative). A biconvex or plano-convex lens converges collimated light (i.e., is positive) to a focal point or focus. The [url=http://hyperphysics.phy-astr.gsu.edu/hbase/hframe.html]equation[/url] for the reciprocal of the focal length [i]f[/i] is

[list]1 / [i]f[/i] = ([i]n[/i] − 1)(1 / [i]R[/i][sub]1[/sub] − 1 / [i]R[/i][sub]2[/sub] + [i]d[/i]([i]n[/i] − 1) / ([i]n[/i][i]R[/i][sub]1[/sub][i]R[/i][sub]2[/sub]))[/list]

for the refractive index [i]n[/i] (which is equal to [i]c[/i] / [i]v[/i], or the celerity of light divided by the velocity of phase) of the lens material, the radius of curvature of the lens surface [i]R[/i][sub]1[/sub] in the vicinity of the light, the radius of curvature of the lens surface [i]R[/i][sub]2[/sub] not in the vicinity of the light, and the breadth [i]d[/i] of the lens (the distance along the lens axis between the two surface vertices). Convex surfaces are indicated by [i]R[/i][sub]1[/sub] > 0 and [i]R[/i][sub]2[/sub] < 0. Concave surfaces are indicated by [i]R[/i][sub]1[/sub] < 0 and [i]R[/i][sub]2[/sub] > 0. Lenses are either convergent ([i]f[/i] > 0) or divergent ([i]f[/i] < 0). Magnification is equal to −[i]d[/i][sub]i[/sub] / [i]d[/i][sub]o[/sub] for the distance of the object to the lens [i]d[/i][sub]o[/sub] and the distance of the image to the lens [i]d[/i][sub]i[/sub]. This is equivalent to the proportion of the height of the image and the height of the object. The angular magnification of a telescope is the relation of the focal length (of the objective lens in a refractor or of the primary mirror in a reflector) to the focal length of the ocular lens. In a microscope it is the relation of the focal length of the objective to the distance between the focal planes of the objective and ocular lenses. For negligible [i]d[/i], the dioptric potency 1 / [i]f[/i] is equal to 1 / [i]d[/i][sub]o[/sub] + 1 / [i]d[/i][sub]i[/sub]. This is the Gaussian lens formula. The composite dioptric potency of lenses in contact is the sum of their dioptric potencies. In optometry, a corrective lens (for [i]oculus dexter[/i] or "right eye" and [i]oculus sinister[/i] or "left eye" in the perspective of the person) is prescribed, constructed and dispensed with a spherical correction in dioptric potency (positive for convergent and negative for divergent lenses). Dissimilar to a cylindrical lens for the cylindrical correction of an astigmatism (deviation), which focuses light into a line and not a point, a spherical lens has equal (uniform) curvature and dioptric potency in all directions (meridians) perpendicular to the optical axis. There are two notations (conventions) for a prescription where the cylindrical correction is either plus cylinder (more convergent) or minus cylinder (more divergent) relative to the spherical correction (sphere). To convert between the notations, add the sphere and cylinder values to obtain the new sphere, invert the sign of the cylinder value, and add 90° to the axis value (subtract 180° from the result if it exceeds 180°). An axis of 90° is vertical, whilst 0° or 180° is horizontal. A positive cylindrical dioptric potency is most convergent 90° from the axis, whilst a negative one is most divergent 90° from the axis. If the cylinder is zero, the lens is spherical. In photography, the reciprocal of the relative aperture (the focal number [i]N[/i]) is the pupil diameter of the effective aperture [i]d[/i] divided by the focal length [i]f[/i]. The illuminance of the projected image relative to the luminance in the field of view (vision) reduces with the square of this focal number [i]N[/i]. The profundity of field ([url=https://graphics.stanford.edu/courses/cs178-10/]depth of field[/url]) of an objective lens for acceptable focus is approximately proportional to the number (for a circle of confusion of an image, or its conjugate scaled by magnification, a focal length and a distance to an object in the focal plane). Lenses (physical or geometric) are a perspective transformation from real object space to virtual image space. 
As surfaces, they are subject to radiant, spectral or luminous exposure (fluence) from the energetic and photic flux of irradiance and illuminance. Luminous intensity (which is analogous to radiant intensity) is the luminous flux per steradian (the three-dimensional analogue to the two-dimensional radian), which is different from the radiant flux (the radiant power or energy per unit of time) of the total electromagnetic radiation (not solely the visible spectrum).
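
The lensmaker's equation and the Gaussian lens formula can be sketched together (a thin biconvex lens with illustrative radii; the sign convention follows the text):

[code]def lensmaker_focal_length(n, r1, r2, d=0.0):
    """Focal length (m) from the lensmaker's equation."""
    inv_f = (n - 1.0) * (1.0 / r1 - 1.0 / r2
                         + d * (n - 1.0) / (n * r1 * r2))
    return 1.0 / inv_f

def image_distance(f, d_o):
    """Gaussian lens formula: 1/f = 1/d_o + 1/d_i."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

# Thin biconvex lens (R1 > 0, R2 < 0) with n = 1.5:
f = lensmaker_focal_length(n=1.5, r1=0.10, r2=-0.10)  # 0.1 m
d_i = image_distance(f, d_o=0.30)                     # 0.15 m
print(f, d_i, -d_i / 0.30)  # focal length, image distance, magnification[/code]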

In lenses, the focal distance to the focus where axial light from infinity converges (or appears to, if it diverges) depends on the concave or convex radial curvature of the surface of the refractive material. The formation of real or virtual images depends on the curvature of the lens and the location of the object relative to the focal distance. Real images are formed by the convergence (as opposed to the extension of the divergence for virtual images) of light rays that can project at a real location. In vision, the eye (composed of the cornea, sclera, iris, pupil, ciliary muscle, sphincter, etc.) functions as a lens, receiving light (illumination through an aperture in dilation) as visual stimulus from the external world of observation. It projects a scale replica of this visual field onto the retina at the rear of the eye. At the retina the transduction of the visual signals to neural signals occurs. These ocular sensory data (information) connect by the optic nerve fibres to neural circuits and cerebral structures for filtering and processing. The detection of objects and motion is by the apparent contrast generated, or the difference of light luminance or colour (wavelength, the inverse of frequency, represented with the symbol λ or lambda, which elsewhere indicates the unrelated [url=https://www.nationstates.net/page=dispatch/id=1167374]eigenvalue[/url]). The single-lens reflex (reflection of light by a mirror at a 45 degree angle, with a projection to a pentaprism for internal reflection to be viewed as an appearance in the ocular lens) camera is popular in photography. The functions of the photographic camera control the light sensitivity of the film or sensory matrix (transducers), the obturator (shutter) velocity (duration of exposure to the light projected by the objective lens), and the aperture (the diaphragm of the objective with a diameter in terms of focal distance that controls exposure to light and profundity of field). The transducer to the image medium is an analogue transparent plastic substrate as a latent photochemical image in a colloid suspension, or a digital photoreceptor of metal–oxide–[url=https://www.nationstates.net/page=dispatch/id=1378729#Electro]semiconductor[/url] (MOS) capacitors or transistors that represents the image as pixels in the (magnetic disc or band, or electronic solid-state) memory of the integrated circuit. These sensory detectors replaced the cathode ray tubes (a dissector that focuses the light or photons of a scene onto the photocathode that emits electrons in the photoelectric effect, where the magnitude of the electric current at the anode is proportional to the luminance of the image) in videographic cameras. A charge-coupled device transfers photogenerated electric charge between capacitors that represent pixels. In a similar [url=https://meroli.web.cern.ch/lecture_cmos_vs_ccd_pixel_sensor.html]conversion[/url] of radiation (from the detection of a photon to the generation of an electron in a current with a photodiode), a transistor of an active (i.e., with amplification and not passive) pixel image sensory detector converts electric charge per pixel. Both technologies convert the charge to a potential difference at the junction. 

Diffraction is a phenomenon occurring when the propagation of waves encounters the geometrical shadow or [i]umbra[/i] of an obstacle or aperture. Interference from superposition results in maxima and minima. The Huygens–Fresnel principle states that each point on a primary wave is a source of spherical wavelets, and the secondary wavelets emanating from different points mutually interfere. The diffraction equation of Kirchhoff is derived from the wave equation, in which the second temporal derivative of the displacement [i]u[/i]([i]x[/i][sub]1[/sub], [i]x[/i][sub]2[/sub], …, [i]x[/i][sub][i]n[/i][/sub]; [i]t[/i]) is proportional to its Laplacian or Laplace operator ∇[sup]2[/sup], where the nabla ∇ is the vector of the partial derivatives of the [i]n[/i]-dimensional coordinates with the canonical or natural basis of unit vectors. The Laplacian ∇[sup]2[/sup] is the scalar product ∇ · ∇, or the divergence of the gradient of a function. The vector product (notated with ×) of the ∇ with a vector field is the rotational vector operation (rotation). The approximation of the diffraction equation in the far-field region (with [i]r[/i] as the distance from the radiation, from two wavelengths or 2λ to infinity or ∞) is referred to as Fraunhofer diffraction (named for Joseph von Fraunhofer), whilst in the near-field region (within one wavelength or λ, where the frontier of the reactive region is [i]r[/i] = λ / 2π and that of the radiative region is [i]r[/i] = λ) it is referred to as Fresnel diffraction. In diffraction and antennae, the distinction is defined by the distance 2[i]D[/i][sup]2[/sup] / λ for diameter [i]D[/i]. In optics, Augustin-Jean Fresnel, Christiaan Huygens, Thomas Young and Robert Hooke are credited with advancing the wave theory of light (in contrast to the particle theory of Newton) that was subsumed in Maxwell's electromagnetic equations. 
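
The conventional far-field boundary can be computed in a brief sketch (the aperture diameter and frequency are illustrative):

[code]def fraunhofer_distance_m(diameter_m, wavelength_m):
    """Conventional far-field (Fraunhofer) boundary 2*D**2 / lambda."""
    return 2.0 * diameter_m ** 2 / wavelength_m

# A 0.5 m antenna radiating at 2.4 GHz (lambda = c / f ≈ 0.125 m):
c = 299_792_458.0
wavelength = c / 2.4e9
print(fraunhofer_distance_m(0.5, wavelength))  # ≈ 4 m[/code]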

[anchor=Stats][size=100][b]Statistics[/b][/size][/anchor]

The empirical equation of state for ideal gases stipulates that the product of the pressure (in [url=https://www.nationstates.net/page=dispatch/id=1378729#Mater]Pascals[/url]) and volume (in cubic metres or multiples of 1000 litres) is equal to the product of the quantity of (material) substance, the Boltzmann constant, the Avogadro constant, and the absolute (thermodynamic) temperature (in [url=https://www.nationstates.net/page=dispatch/id=1511576#Stats]Kelvins[/url]). Absolute zero is a temperature of −273.15° centigrade, with the freezing (congelation) and boiling (fervent vapour) points of water at 0° and 100°, respectively measured and indicated by a thermometer (with milligrade degrees, these correspond to 0° and 1000°). The equation is a combination of the relations studied by Robert Boyle (the inverse proportionality of pressure and volume), Jacques Charles (the direct proportionality of volume and temperature), Joseph Louis Gay-Lussac (the direct proportionality of pressure and temperature), and Amedeo Avogadro (the direct proportionality of volume and the number of molecules). The relations of partial pressures and volumes (of a mixture as a summation of individual components, respectively studied by John Dalton and Émile Amagat), and the relation studied by Thomas Graham (the inverse proportionality of the velocity of diffusion and the square root of the mass density) are complementary to this general (universal) approximation of real gases, whose intermolecular interactions Johannes Diderik van der Waals (who was influenced by Maxwell) accounted for in a modified physical and chemical formulation. 
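
The equation of state can be evaluated directly (a minimal sketch; the quantity, volume and temperature are illustrative):

[code]K_B = 1.380649e-23    # Boltzmann constant, J/K
N_A = 6.02214076e23   # Avogadro constant, 1/mol

def pressure_pa(n_mol, volume_m3, temperature_k):
    """Ideal gas pressure: p = n * N_A * k_B * T / V."""
    return n_mol * N_A * K_B * temperature_k / volume_m3

# One mole in 22.7 litres at 273.15 K is about one bar (1e5 Pa):
print(pressure_pa(1.0, 0.0227, 273.15))[/code]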

Maxwell, with Josiah Willard Gibbs and Ludwig Boltzmann, invented statistical mechanics. Gibbs (who independently invented vector calculus) proposed thermodynamics (founded by Sadi Carnot, who is named after the Persian poet [url=https://www.nationstates.net/page=dispatch/id=1106976]Saadi of Shiraz[/url]; James Watt, James Joule, and William Thomson (the Baron of Kelvin) are important in this discipline, amongst others) as the consequence of the statistical properties of ensembles of the possible macrostates of a physical system composed as a collection of a multitude of particles assigned with probabilities. In his elucidation of the irreversibility of physical processes in probabilistic terms, he generalised the interpretation of entropy (introduced by Rudolf Clausius) for an arbitrary ensemble with all possible microstates and their correspondence of probabilities, which would influence the [url=https://www.nationstates.net/page=dispatch/id=1282502#ET]information theory[/url] of Claude Shannon. In emulation of the form 

[list][i]S[/i] = [i]k[/i][sub]B[/sub] ln [i]Ω[/i][/list]

by Boltzmann and Gibbs as the expression for thermodynamic entropy [i]S[/i], with [i]Ω[/i] (Omega) as the number of equally probable microstates that correspond to the thermodynamic macrostate, and the constant [i]k[/i][sub]B[/sub] (named for Boltzmann and equal to 1.380649×10[sup]−23[/sup] J⋅K[sup]−1[/sup], for a unit K or Kelvin of thermodynamic temperature) in a statistical micro-canonical ensemble (system), Shannon formulated information entropy [i]H[/i] as the summation from [i]i[/i] = 1 to [i]i[/i] = [i]n[/i] of

[list][i]p[/i][sub]i[/sub] log[sub][i]b[/i][/sub](1 / [i]p[/i][sub]i[/sub])[/list]

for the probability [i]p[/i][sub]i[/sub] of the [i]i[/i]th element in the message space with a cardinality of [i]n[/i], for a logarithmic base [i]b[/i] (equal to 2 if the unit of entropy is in bits, or [i]e[/i] for the natural logarithm ln). The probability distribution [i]p[/i][sub]i[/sub] is equal to 1 / [i]n[/i] when every element in the message space is of equal probability. The information entropy is the expected value of [url=https://www.nationstates.net/page=dispatch/id=1378729#Neuro]information content[/url] (the information contained in an event, which grows as the event becomes less probable). It is measured per symbol of communication, and the information content of a symbol is inversely related to its frequency or certainty of occurrence. The functional form resembles the decadic logarithm for the relative unit of the bel (B, named for Alexander Graham Bell) that is a logarithmic quantity or "level". For a quantity [i]y[/i] directly proportional to power (e.g., sonic or acoustic intensity, luminous intensity, and energy density) or a quantity [i]x[/i] the square of which is directly proportional to power (e.g., amplitude of electric potential or current for a constant impedance, sonic or acoustic pressure, and charge density), a level is defined as 

[list][i]L[/i] = log[sub]10[/sub]([i]y[/i] / [i]y[/i][sub]0[/sub])[/list]

for a reference value [i]y[/i][sub]0[/sub]. Since [i]y[/i] is proportional to [i]x[/i][sup]2[/sup] as a function of time [i]t[/i],

[list][i]L[/i] = log[sub]10[/sub]([i]x[/i][sup]2[/sup] / [i]x[/i][sub]0[/sub][sup]2[/sup]) = 2 log[sub]10[/sub]([i]x[/i] / [i]x[/i][sub]0[/sub]).[/list]

An alternative unit is the neper (Np, named for John Napier, the inventor of logarithms) where the logarithmic base is [i]e[/i] (natural logarithm ln) not 10, such that the level is equal to:

[list]ln([i]y[/i] / [i]y[/i][sub]0[/sub]) = ln([i]y[/i]) − ln([i]y[/i][sub]0[/sub]).[/list]

One Np is equal to 2 log[sub]10[/sub]([i]e[/i]) B and one B is equal to (1 / 2) ln(10) Np. These units are dimensionless. They are related to the logarithm of the relation of frequencies (a decade with base 10 and an [url=https://www.nationstates.net/page=dispatch/id=1288113#Music]octave[/url] with base 2).
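
The conversions between these logarithmic units can be sketched directly (the sample values are illustrative):

[code]import math

def level_db(y, y0):
    """Level of a power quantity in decibels (10 dB = 1 B)."""
    return 10.0 * math.log10(y / y0)

def db_to_neper(level_in_db):
    """1 B = (1/2) ln(10) Np, hence 1 dB = ln(10)/20 Np."""
    return level_in_db * math.log(10.0) / 20.0

print(level_db(2.0, 1.0))        # ≈ 3.01 dB for a doubling of power
print(db_to_neper(8.685889638))  # ≈ 1.0 Np[/code]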

The communication of a message can be modelled by its transmission space, where it is encoded and transmitted as a signal sequence by the transmitter for the channel. The introduction of a disturbance is represented as a conditional probability. The signal of the message is received and decoded in the reception space, where it is estimated by the receiver for interpretation. The joint entropy [i]H[/i]([i]X[/i], [i]Y[/i]) is determined by the joint probability of two discrete aleatory variables [i]X[/i] and [i]Y[/i] with possible values [i]n[/i] and [i]m[/i] and expected values [i]E[/i]([i]X[/i]) and [i]E[/i]([i]Y[/i]). For equivocation, or conditional entropy, the entropy of [i]Y[/i] conditioned on [i]X[/i], [i]H[/i]([i]Y[/i]|[i]X[/i]), is equivalent to [i]H[/i]([i]X[/i], [i]Y[/i]) − [i]H[/i]([i]X[/i]). Transinformation, or mutual information [i]I[/i], measures the reduction of uncertainty of one variable (a signal) in the observation of another. It is symmetrical in its properties, where: 

[list][i]I[/i]([i]X[/i], [i]Y[/i]) = [i]I[/i]([i]Y[/i], [i]X[/i]) = [i]H[/i]([i]X[/i]) − [i]H[/i]([i]X[/i]|[i]Y[/i]) = [i]H[/i]([i]Y[/i]) − [i]H[/i]([i]Y[/i]|[i]X[/i]) = [i]H[/i]([i]X[/i]) + [i]H[/i]([i]Y[/i]) − [i]H[/i]([i]X[/i], [i]Y[/i]).[/list]
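
These identities can be verified on a small joint distribution (the probabilities below are invented for illustration):

[code]import math

def H(probs):
    """Shannon entropy in bits of an iterable of probabilities."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

# Joint distribution P(X, Y) over two binary variables:
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
p_x = {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}
p_y = {y: sum(p for (_, b), p in joint.items() if b == y) for y in (0, 1)}

h_xy = H(joint.values())
i_xy = H(p_x.values()) + H(p_y.values()) - h_xy  # mutual information
print(h_xy, i_xy)  # ≈ 1.72 bits jointly, ≈ 0.28 bits shared[/code]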

Signals are mathematical functions with a continuous or discrete dependent value (real, imaginary or complex with magnitude and phase) and a continuous or discrete independent variable (temporal, spatial or dimensional). These variable coordinates of time or space are discrete if they are integers and continuous if they are real numbers. Discretisation in the amplitude (magnitude) of the signal is referred to as quantisation. Discretised digital indices of the variables of the analogue signal are samples. According to the Nyquist–Shannon sampling theorem (named for Claude Shannon, who [url=https://spectrum.ieee.org/geek-life/history/a-man-in-a-hurry-claude-shannons-new-york-years]encountered[/url] Turing, and Harry Nyquist, who worked with Hendrik Bode), the sampling frequency must be greater than twice the maximum frequency of the original signal (20 kHz for human audition) for it to be reproduced (reconstructed); the signal is restricted in spectral bandwidth with the passage of an anti-aliasing filter that attenuates frequencies greater than half the sampling frequency. This filter is typically a low-pass filter, whose ideal is a rectangular function in the frequency domain that is linear, is time-invariant, and attenuates to zero amplitude (from one) at half the sampling frequency, or [url=https://www.nationstates.net/page=dispatch/id=1378729#Maths]π[/url] radians rotational (angular or circular) frequency [i]ω[/i] (omega) (2π[i]f[/i], for temporal frequency [i]f[/i] in Hz or Hertz and reciprocal seconds, which is equal to one cycle per second or s as defined by 9192631770 cycles of the hyperfine structure transition frequency of caesium-133 atoms, or stable isotopes with 55 protons and 78 neutrons in an atomic nucleus). Its time domain impulse response is a cardinal sine function, or sin(π[i]t[/i]) / π[i]t[/i] in time [i]t[/i]. A filter is implemented as the convolution of the signal with the impulse response. A linear contraction (or expansion) in the time domain corresponds in a duality to a linear expansion (or contraction) in the frequency domain. The sampling operation is mathematically equivalent to the multiplication of the signal with the sampling function, which is a serial pecten (a "comb" or "train") or periodic sum (where a period is the inverse of frequency) of an infinite sequence of Dirac impulses. 
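
Aliasing under the sampling theorem can be demonstrated in a few lines (the sampling frequency and tones are illustrative; a 600 Hz tone sampled at 1000 Hz folds to 400 Hz):

[code]import numpy as np

fs = 1000.0                      # sampling frequency, Hz
t = np.arange(0, 1.0, 1.0 / fs)  # one second of sample instants

# 400 Hz lies below the Nyquist frequency fs/2 = 500 Hz and is
# represented faithfully; 600 Hz aliases to |600 - 1000| = 400 Hz.
x_ok = np.sin(2 * np.pi * 400 * t)
x_alias = np.sin(2 * np.pi * 600 * t)

spectrum_ok = np.abs(np.fft.rfft(x_ok))
spectrum_alias = np.abs(np.fft.rfft(x_alias))
print(np.argmax(spectrum_ok), np.argmax(spectrum_alias))  # 400 and 400[/code]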

[url=https://spectrum.ieee.org/view-from-the-valley/computing/software/a-madefortv-compression-algorithm]Data compression[/url] (reduction by a code of codification) is a discipline of computation in the information technology sector. The evaluation of its efficacy and efficiency is defined in terms of the emphasis of the practice of the practitioner and the theory of the theoretician: the practical compression time and the theoretical compression ratio. This ratio (rate, division or relation), which corresponds to the complexity of the signal or ensemble of data and media and to the algorithm or implementation, is the metric or measurement of the relative reduction in quantity (the "tally" or "taille", as the magnitude, dimension and proportion) of the data representation produced. Modes of registration and presentation of information include text, pictures, forms, artefacts, images, literal and graphic documents, visual video and aural audio in the registers of the medium of memory. The two classes of data compression algorithms are reversible and irreversible (without and with perdition, distortion and degradation, respectively). The primary computes a statistical model of the data and then transforms the data such that "probable" (encountered with a frequency of occurrence) data is assigned shorter bit sequences or chains than "improbable" data using entropy coding (encoders and decoders, e.g. the process of arithmetic coders and the process of David Huffman). The optimal code length for a symbol in the method of Shannon is –log[sub][i]b[/i][/sub]([i]P[/i][sub][i]i[/i][/sub]), where [i]b[/i] is the number of symbols and [i]P[/i][sub][i]i[/i][/sub] is the probability of the symbol [i]i[/i]. For [i]b[/i] = 2, there are 2[sup][i]n[/i][/sup] potential levels (amplitudes, phases or frequencies) that represent symbolic signals communicated (systemic information transferred) with [i]n[/i] [url=https://www.nationstates.net/page=dispatch/id=1378729#Neuro]bits[/url] (binary digits) per pulse or symbol in a temporal unit interval or duration of seconds. The method of Robert Fano orders the symbols by the order of probability and divides them into two (binary or dyadic) ensembles with approximately equal total probabilities, with a successive determination, distribution and allocation of digital codes. The method of Huffman uses the data structure of a tree (an arboreal "boom") and inverts the direction of the division, from the radical root to the foliate leaves, whilst resulting in optimal prefix codes. It creates a node for each symbol in a queue where probability (frequency of occurrence) corresponds to priority. Whilst there is more than one node in the queue, the algorithmic process (a code sketch follows the list below):
[list=1]
[*]Removes the two leaf ("foil") nodes of minor probability from the queue;
[*]Adjoins 0 and 1 as prefixes respectively to any code already assigned to these nodes;
[*]Creates a new internal node with these two nodes as progeny and with probability equal to the sum of the probabilities of the two nodes;
[*]Adds the new node to the queue.
[/list]
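A minimal sketch of the procedure of Huffman (in Python, with hypothetical symbol probabilities and a priority queue as described above) follows; the resulting codes are optimal prefix codes, though not unique.

[code]
# Huffman coding: repeatedly merge the two least probable nodes.
import heapq

frequencies = {'a': 0.45, 'b': 0.25, 'c': 0.15, 'd': 0.10, 'e': 0.05}

# Queue entries are (probability, tie-breaker, node); a node is a symbol
# string (a leaf) or a pair of child nodes (an internal node).
queue = [(p, i, s) for i, (s, p) in enumerate(frequencies.items())]
heapq.heapify(queue)
counter = len(queue)

while len(queue) > 1:
    # Remove the two nodes of least probability from the queue...
    p1, _, left = heapq.heappop(queue)
    p2, _, right = heapq.heappop(queue)
    # ...and add an internal node with these as progeny, whose
    # probability is the sum of their probabilities.
    heapq.heappush(queue, (p1 + p2, counter, (left, right)))
    counter += 1

_, _, root = queue[0]   # the residual radix (root) node

def assign_codes(node, prefix=''):
    """Adjoin 0 and 1 as prefixes, descending from root to leaves."""
    if isinstance(node, str):
        return {node: prefix or '0'}
    left, right = node
    return {**assign_codes(left, prefix + '0'),
            **assign_codes(right, prefix + '1')}

print(assign_codes(root))
[/code]
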
The residual node (with the major probability, equal to unity) is the radix (root) node. Other algorithms (e.g., those of Abraham Lempel and Jacob Ziv) digest a stream of data with the substitution of repeated occurrences of data (with the unit of bits) with a reference to their position in an associative table of fields (a collection of attributes, names or keys in a finite domain) as a correspondent ensemble of values. The generated or constructed models of estimated or measured statistics are either static (modular) or dynamic (adaptive). The secondary approximates (inexact and imperfect, not exact or perfect) a duplication (regeneration, reproduction, reconstruction and recreation) of the original digital data (information) by a cycle of transformation (a function and conversion) of compression and expansion. Dissimilar to reversible compression, these processes result in artefacts (discernible, perceptible, distinguishable and visible effects, e.g. temporal and spatial aliases).

[anchor=QM][size=125][b]Quantum Mechanics[/b][/size][/anchor]

[url=https://link.springer.com/article/10.1007/s40509-014-0008-4]Quantum mechanics[/url] (in the description of state in a physical system with a function of the superposition of vectors) combines a probabilistic (stochastic) interpretation with deterministic dynamics in evolution. The description of the possibilities of an abstract system is a representation of Nature (natural reality). This cosmic physical theory postulates that the world of local experience exists as one of multiple parallel worlds of reality. In the act of measurement ([url=https://link.springer.com/article/10.1007/s41470-019-00031-6]registration[/url] of experimentation and observation), the transition from the "possible" to the "actual" occurs in the interaction (connection and relation) between the object and the subject. It represents a collapse or reduction of the function to an eigenstate (i.e., of an observable property or characteristic including position, momentum and energy, with the transfer of the latter associated with causality in spacetime). For example, a photon of light affects the properties of the phenomenon (e.g., an electron) and the value of the quantity measured (observed and experienced). The measurement of the location (certain position) of light (photons) or current (electrons), which propagates as a wave with amplitude, interference and diffraction, collapses its undulation (und, wave, waw or billow) function of degrees of freedom (that describe the states of vibration in the quantum system), and the light or current exhibits comportment similar to a particle. The experiment of Albert Michelson and Edward Morley determined no evidence for the existence of the luminiferous aether, the supposed medium for light. This result initiated research in special relativity. Enrico Fermi was the first to realise that the mass–energy equivalence had consequences for the energetic radiation from the radioactivity of nuclear fission. Einstein formulated this in his theory of special relativity for energy as [i]E[/i], mass [i]m[/i] and the celerity of light [i]c[/i]:

[list][i]E[/i] = [i]m[/i][i]c[/i][sup]2[/sup]. [/list]
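
As a minimal worked example (in Python, assuming the total conversion of a hypothetical mass of one gram):

[code]
# Mass-energy equivalence: E = m * c**2.
c = 299_792_458.0   # celerity of light in vacuum, m/s
m = 1e-3            # hypothetical mass, kg

print(m * c ** 2)   # about 9.0e13 joules
[/code]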

From this relativistic physics, Erwin Schrödinger published his diffusion equation for the probability amplitude to describe the state function of a quantum-mechanical system. Werner Heisenberg introduced his alternative and equivalent formulation of quantum mechanics with matrix mechanics. He would develop a principle that asserts the fundamental limit to the certainty with which the values of the complementary physical quantities of position and momentum of a particle in motion can be known. Richard Feynman also introduced his path integral formulation, where there is an infinity of possible trajectories of action. A graphical diagram (a method presented by Feynman and Dyson) represents the contribution of perturbations to the transition amplitude probabilities for a quantum system from the initial to the final state. Wolfgang Pauli formulated a quantum mechanical principle that conditions that, for two or more identical (indistinguishable or indiscernible) particles with a half-integer gyration as an intrinsic form of angular momentum, it is impossible for them to occupy the same state in a quantum system simultaneously. This exclusion extends to leptons (elementary particles such as electrons and neutrinos) and baryons (composite particles that are a type of hadron, such as protons and neutrons). A photon, which possesses zero mass, is not included because it mediates force and interactions and is not of the generations of matter particles. Particles such as these possess an integer gyration and the property of a symmetric wave function. Two electrons in the same atomic orbital have equal values for three of their quantum numbers: the principal quantum number, the azimuthal quantum number and the magnetic quantum number. They do not have an equal quantum number that indicates the gyration and its orientation as a vector. Their charge is a third degree of freedom in their state. The total wave function for multiples of these particles is antisymmetric. Elementary particles are fundamental and material constituents. Each particle associates with an antiparticle with equal mass and opposite charge. The superposition principle (i.e., where a linear combination of solutions to a linear equation is a solution of it) is applicable to the vectors of quantum states. The configurations of particles in a general state of a system are specified by [url=https://www.nationstates.net/page=dispatch/id=1167374]complex numbers[/url] (a phase vector or complex amplitude) as coefficients. This is analogous to a probability distribution in statistics, where the probabilities of mutually exclusive events total (sum) to unity (the probability of their union or disjunction). Max Born described the absolute value (modulus or magnitude) of a complex number, where

[list][i]r[/i] = |[i]z[/i]| = √([i]x[/i][sup]2[/sup] + [i]y[/i][sup]2[/sup])[/list]

and the tangent of phase is 

[list]tan [i]φ[/i] = sin [i]φ[/i] / cos [i]φ[/i] = [i]y[/i] / [i]x[/i],[/list]

whose square is the product of it 

[list][i]z[/i] = [i]x[/i] + [i]i[/i] [i]y[/i] = [i]r e[/i][sup][i]iφ[/i][/sup] [/list]

and its complex conjugate 

[list][i]z[/i]* = [i]x[/i] − [i]i[/i] [i]y[/i] = [i]r e[/i][sup]−[i]iφ[/i][/sup][/list]

(with a notation not to be confused with a conjugate transpose of a matrix with real and imaginary numbers as complex elements from [i]m[/i]×[i]n[/i] to [i]n[/i]×[i]m[/i]). This square (quadrate) of the probability amplitude is the probability (continuous density, in contrast to discrete mass) that physical particles are in a spatial configuration, position or situation at a temporal instant. For electrons, superposition manifests as the physical interference phenomenon of amplitude in the double-slit (fissure) experiment of Young. 
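
A minimal sketch of this rule of Born (in Python, with hypothetical complex amplitudes for a two-state superposition):

[code]
# The probability of each outcome is z multiplied by its conjugate z*,
# i.e. the squared modulus |z| ** 2.
import cmath

# Hypothetical phase vectors (complex amplitudes) for two outcomes.
alpha = cmath.rect(0.6, cmath.pi / 4)    # r = 0.6, phase = +pi/4
beta = cmath.rect(0.8, -cmath.pi / 3)    # r = 0.8, phase = -pi/3

p_alpha = (alpha * alpha.conjugate()).real   # 0.36
p_beta = (beta * beta.conjugate()).real      # 0.64

print(p_alpha, p_beta, p_alpha + p_beta)     # probabilities sum to unity
[/code]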

[anchor=Mag][size=100][b]Magnetism[/b][/size][/anchor]

Electric circuits with resistive, capacitive and inductive elements connected by conductive material to sources of current and potential difference are, in general, linear and thus analysable by superposition. These, for electric charge, are [url=http://hyperphysics.phy-astr.gsu.edu/hbase/electric/watcir2.html]analogous[/url] to the volumetric flow and pressure of a fluid. If linear, the output (produced signal) of a circuit, whose input is a linear combination of contributing signals, is equal to the linear combination of the outputs produced by the separate input of the contributing signals. A response to plural stimuli (the amplitude or magnitude of an effect resulting from the signals of the causes) is equal to the sum of the individual responses to each stimulus (a linear function). The superposition principle corresponds to an additive and multiplicative (homogeneous) function. Linear systems are governed by linear differential equations (equations that relate functions and their derivatives). The phenomenon of diffraction and interference (a distortion or disturbance), which is constructive when the phase difference between the waves in propagation is a parity (a multiple divisible by two) of π radians and is destructive when the difference is an imparity (a multiple indivisible by two) of π radians, is an example of superposition. Periodic functions can be represented (decomposed in analysis and composed in synthesis, analogous to a transformation) as an infinite series (a finite sum in approximation) of component sinusoids (harmonic sine and cosine functions with coefficients and integer multiples of the fundamental frequency, which can be represented on a circle as a phase and magnitude spectrum). In colloquial and electrotechnical speech, declaring someone "non-linear" amounts to an insult of their intelligence (e.g., calling them "mad, stupid, moronic, psychotic, insane, inane" or a "fool, imbecile, idiot, maniac" for dementia and an absence of reason or "wit", similar to the irrational incoherence in intellect of the logical error and formal fallacy of a [url=https://www.nationstates.net/page=dispatch/id=1346583#Sci][i]non sequitur[/i][/url]) because of the difficulty of resolving these systems. Electronic circuits, typically non-linear, consist of these passive components (conductors, resistors, capacitors, inductors, and diodes) and the active components of transistors and amplifiers. Impedance is a complex number (with a magnitude and polar phase angle) of real resistance (i.e., of a resistor) and imaginary reactance (i.e., of a capacitor or inductor). The quantity of charge stored (reserved) relative to potential difference is capacitance (cf. current with respect to potential for conductance, and magnetic flux relative to current for inductance, or the electromotive force generated to oppose a change in current with respect to time).
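
A minimal sketch (in Python, with hypothetical component values) of the complex impedance of a series resistor, inductor and capacitor, and of the superposition of two phasor inputs:

[code]
import cmath

R, L, C = 100.0, 1e-3, 1e-6    # ohms, henries, farads (hypothetical)
omega = 2 * cmath.pi * 1000.0  # rotational (angular) frequency, rad/s

# Impedance: real resistance plus imaginary inductive and capacitive
# reactances, in series.
Z = R + 1j * omega * L + 1 / (1j * omega * C)
print(abs(Z), cmath.phase(Z))  # magnitude and polar phase angle

# Linearity: the response (current) to a sum of potential inputs equals
# the sum of the responses to each input separately.
v1, v2 = 5.0 + 0j, 0.0 + 3.0j
assert cmath.isclose((v1 + v2) / Z, v1 / Z + v2 / Z)
[/code]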

A spiral or helical spool (roll or volute) of insulated or isolated conductive filament is wound (involved or enrolled) around a magnetic core (kernel or nucleus) to filter high frequencies ("noise") as a passive low-pass filter (the filtration and attenuation of electromagnetic radio-frequency interference greater than a "cut" (cut-off) frequency, with a response that passes or permits continuous current and low-frequency alternative current). The magnet is typically ferrite (a ceramic material of iron or ferric oxide in a composite with metal oxides). They are ferrimagnetic, which is a type of spontaneous magnetisation distinct from that of a ferromagnet, where all the magnetic moments of a material are aligned (i.e., none are in the opposite direction). Their electrical resistance (the resistive reciprocal of conductance, which is the real component of the complex admittance with imaginary susceptance or permittance) diminishes induced parasitic currents in planes perpendicular to a changing flux of a magnetic field (in a direction that opposes it, which was formulated by Heinrich Friedrich Emil Lenz and was discovered as a phenomenon by Jean Bernard Léon Foucault). The magnetic coercivity categorises the ability of a ferromagnetic material to not become demagnetised in the application of an external magnetic field (confer electric coercivity as the analogue for the ability of a ferroelectric material to not become depolarised in the application of an external electric field). Upon the substrate of magnetic tapes ("bands"), a stratum ("strate") of ferric metal particles is used for the recording of information. The aleatory (direct, as opposed to sequential) access memories of computers used ferrite, where magnetic hysteresis permitted the record of a state as one bit of information (determined by the chiral direction of the magnetisation) in [url=https://www.nationstates.net/page=dispatch/id=1378729]non-volatile[/url] memory. A transformer consists of a primary and a secondary spool (each a cylinder with a number or quantity of windings) that is wound around a core (toroidal ring). A varied primary current produces a magnetic flux in the permeable core that induces a varied electromotive force (potential difference) for the secondary spool. The secondary current produced creates a magnetic flux equal and opposite to that produced by the primary current. The symmetry of a toroid reduces the perdition of flux, and it possesses a greater inductance than a solenoid. Dissimilar to inductors (reactors), a ferrite filter converts radio-frequency energy to the dissipation of heat. It results in a complex impedance (with the components of resistance, inductive reactance and capacitive reactance) that impedes these signals. An inductor results in an inductive reactance. A conductive cable acts as an antenna that receives interference and transmits emissions as a radiator. The balance of a line or circuit is determined by the equality or symmetry of the impedances of the conductors with respect to ground or Earth. It results in the equal exposure to external magnetic fields ([i]campi[/i], plural of [i]campus[/i]) and the induction ("coupling") of a common mode signal.  

[anchor=NMR][size=100][b]Nuclear Magnetic Resonance[/b][/size][/anchor]

Nuclear magnetic resonance (NMR, discovered by Isidor "Izzy" Isaac Rabi) is used in magnetic resonance imaging (MRI) for medical and clinical diagnosis, as is computed tomography (CT). The electrical engineer [url=https://theconversation.com/50-years-ago-the-first-ct-scan-let-doctors-see-inside-a-living-skull-thanks-to-an-eccentric-engineer-at-the-beatles-record-company-149907]Godfrey Hounsfield[/url] produced the principal (first) CT image at EMI (Electric and Musical Industries). MRI and X-ray electromagnetic radiation (discovered by Wilhelm Röntgen) facilitate medics in directing therapy or surgery. All nucleons (neutrons or protons, as particles of an atomic nucleus) have the intrinsic quantum property of gyration. The gyration (angular momentum) is proportionate (∝) to a magnetic dipole moment. These align parallel or anti-parallel in the presence of a magnetic field. The particles precess (in an orientation that is either parallel or anti-parallel to the gyration) around the precessional axis (the direction of the static external magnetic field). The frequency of the precession (named for Joseph Larmor) is proportional to the external magnetic field, which exerts a rotational force on the magnetic dipole moment. Felix Bloch introduced the equations of motion for nuclear magnetisation. The magnetisation (polarisation) consists of longitudinal and transverse components. The particles in space relax (return or recuperate, with a time constant of a first-order, linear time-invariant system that is the reciprocal of the relaxation dynamic) to the initial thermodynamic equilibrium state of gyration with a longitudinal magnetic relaxation (parallel to the external magnetic field). In transverse magnetic relaxation (perpendicular to the external magnetic field), the particles relax in alignment (decay to zero) and cease production of the electromagnetic signal with the radio (Larmor) frequency at an oscillation of resonance. 

For liquid materials, the relaxation time constant of the longitudinal relaxation is equal to that of the transverse relaxation. For viscous liquids and solids, the longitudinal relaxation time is greater than the transverse relaxation time. The application of a 90-degree pulse in a constant magnetic field results in alignment (magnetisation or polarisation). The transmitted oscillation of the transverse magnetisation induces a current in the receiver as a proportional signal. The Larmor frequency is contained in an envelope of the transverse magnetisation that relaxes to zero after the termination of the pulse. The pulse rotates the longitudinal magnetisation into the transverse plane for detection. The heterogeneity (not homogeneity) of the magnetic field in space results in different gyrations and frequencies of precession. After the 90-degree pulse, the evolution results in dephased gyrations in the transverse plane and a reduction of the transverse relaxation time constant. To mitigate this, an inversion by a 180-degree pulse inverts the longitudinal magnetisation and one component of the transverse magnetisation. If this pulse occurs at half the time between the 90-degree pulse and the "echo" received, the gyrations return to phase so the time constant of the echo formation can be measured. An image is formed from the gradient fields in space of the magnetic field. These gradients selectively excite gyrations with a band of radio frequencies that corresponds to a Larmor frequency. This selective excitation is analogous to a projection. The Fourier Transform of the signal produces a projection of the transverse magnetisation (whose phase is related to the application of gradients) through an object. 
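
A minimal sketch (in Python, with a hypothetical field strength and relaxation time constants; the gyromagnetic ratio quoted is that of the hydrogen-1 proton) of the Larmor relation and the first-order relaxations:

[code]
import math

GAMMA = 42.577e6   # gyromagnetic ratio (gamma / 2 pi) of the proton, Hz/T
B0 = 1.5           # hypothetical static external magnetic field, tesla

print(GAMMA * B0)  # Larmor precession frequency, about 63.9 MHz

T1, T2 = 0.9, 0.1  # hypothetical longitudinal and transverse constants, s

def longitudinal(t, m0=1.0):
    """Recovery of magnetisation parallel to the field after a pulse."""
    return m0 * (1.0 - math.exp(-t / T1))

def transverse(t, m0=1.0):
    """Decay towards zero of magnetisation perpendicular to the field."""
    return m0 * math.exp(-t / T2)

# At t equal to its time constant, each component reaches 1 - 1/e and 1/e.
print(longitudinal(T1), transverse(T2))
[/code]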

The image obtained and reconstructed from an examination or CT scan of a specimen aids in the detection of regions and margins (edges or borders) of anatomy (tissues in physiology and pathology) by division and segmentation for the classification (supervised determination) of the normality or abnormality of the extractions. Methods include support vector machines, k-means vector quantisations, and k-nearest neighbours algorithms. Tomographic reconstruction depends on the Fourier Transform for analysis and its inverse for synthesis. The inversion produces an image of the function (object) from its projections. Convolution with a kernel [i]h[/i] is equivalent to filtration. This transformation named for Fourier, where augmentation in the spatial or temporal domain results in reduction in the frequency domain, and where differentiation and convolution correspond to the operation of multiplication, is equal to the dimensional finite, definite, and infinite summation ∑ or infinitesimal integration ∫ transformation of the product of a function of one-dimensional time [i]t[/i] or two-dimensional space [i]x[/i] and [i]y[/i] and 

[list]exp(–[i]j[/i]2[i]πft[/i]) or exp(–[i]j[/i]2[i]π[/i]([i]ux + vy[/i])),[/list] 

where exp is [i]e[/i] and [i]j[/i] is the imaginary unit [i]i[/i] (polar [i]e[/i][sup][i]iπ[/i]/2[/sup]) in a variation of 

[list][i]e[/i][sup]–2[i]πift[/i][/sup] = cos 2[i]πft[/i] – [i]i[/i] sin 2[i]πft[/i] [/list]

for frequency [i]f[/i] or frequencies [i]u[/i] and [i]v[/i] from –∞ to ∞ for each dimension. Time and space can be discretised by the index of a sequence or series as discrete quantities at indices ([i]k[/i] or [i]n[/i]). The Radon Transform is the line integral of the function, with a ray or line [i]L[/i] at angle [i]θ[/i] ∈ [0°, 180°) at a right-hand chirality from the [i]x[/i]-axis and orthogonal to the [i]z[/i]-axis, that is parameterised as 

[list]([i]x[/i]([i]z[/i]), [i]y[/i]([i]z[/i])) = (([i]r[/i] cos [i]θ[/i] – [i]z[/i] sin [i]θ[/i]), ([i]r[/i] sin [i]θ[/i] + [i]z[/i] cos [i]θ[/i])) [/list]

for an arc length [i]z[/i] and a distance from the origin [i]r[/i]. This is the result of the rotation matrix, a transformation for a vector as a column vector. All points on [i]L[/i] satisfy the equation 

[list][i]r[/i] = [i]x[/i] cos [i]θ[/i] + [i]y[/i] sin [i]θ[/i]. [/list]
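
A minimal numerical check (in Python, with a hypothetical offset and angle) that every point of the parameterisation above satisfies this equation:

[code]
# Points on L from the rotation-matrix parameterisation satisfy
# r = x cos(theta) + y sin(theta) for every arc length z.
import math

r, theta = 2.0, math.radians(30.0)   # hypothetical distance and angle

for z in (-1.0, 0.0, 3.5):
    x = r * math.cos(theta) - z * math.sin(theta)
    y = r * math.sin(theta) + z * math.cos(theta)
    assert math.isclose(x * math.cos(theta) + y * math.sin(theta), r)
[/code]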

The projection is equivalent to the double integral (from –∞ to ∞ for d[i]x[/i] and d[i]y[/i]) of the product of the function [i]f[/i]([i]x[/i], [i]y[/i]) and the Dirac delta function 

[list]δ([i]x[/i] cos [i]θ[/i] + [i]y[/i] sin [i]θ[/i] − [i]R[/i])[/list]

such that δ([i]L[/i]([i]R[/i], [i]θ[/i])) is zero except on line [i]L[/i]. The magnitude or norm of the vector function 

[list][i]x[/i]([i]z[/i]) [b]i[/b] + [i]y[/i]([i]z[/i]) [b]j[/b] [/list]

(where unit vectors [b]i[/b] and [b]j[/b] are normalised and orthogonal) is 

[list]√((d[i]x[/i]/d[i]z[/i])[sup]2[/sup] + (d[i]y[/i]/d[i]z[/i])[sup]2[/sup]) [/list]

(in this case it is equal to unity) for d[i]z[/i]. A linear transformation that preserves area, volume or [i]n[/i]-dimensional contents (the hypervolume of hyperspace in the multiplicity of Euclidean space, with hypersurfaces and hyperplanes, which are one less dimension than their ambient space) is absent of distortion. If preservation occurs, the determinant of the derivative is equal to one; otherwise it is the scale factor. In one dimension, for a function [i]f[/i]([i]x[/i]) where [i]u[/i] = [i]g[/i]([i]x[/i]), the integral of the product of [i]f[/i]([i]g[/i]([i]x[/i])) and d[i]g[/i]([i]x[/i])/d[i]x[/i] for an interval [[i]a[/i], [i]b[/i]] with the differential d[i]x[/i] is equal to the integral of [i]f[/i]([i]u[/i]) for the interval [[i]c[/i], [i]d[/i]], where [i]c[/i] = [i]g[/i]([i]a[/i]) and [i]d[/i] = [i]g[/i]([i]b[/i]), with the differential d[i]u[/i]. For two dimensions, a function [i]f[/i]([i]x[/i], [i]y[/i]) and the transformations 

[list][i]x[/i] = [i]g[/i]([i]u[/i], [i]v[/i])[/list]

and 

[list][i]y[/i] = [i]h[/i]([i]u[/i], [i]v[/i]), [/list]

its integral over the region [i]R[/i] with the differential d[i]A[/i] of d[i]x[/i] and d[i]y[/i] is equal to the integral over the surface [i]S[/i] of the product of [i]f[/i]([i]g[/i]([i]u[/i], [i]v[/i]), [i]h[/i]([i]u[/i], [i]v[/i])) and the Jacobian (the determinant of the partial derivatives of [i]x[/i] and [i]y[/i] with respect to [i]u[/i] and [i]v[/i]) with the differentials d[i]u[/i] and d[i]v[/i]. Consider how the Jacobian of the transformation to polar coordinates 

[list][i]x[/i] = [i]r[/i] cos [i]θ[/i] [/list]

and 

[list][i]y[/i] = [i]r[/i] sin [i]θ[/i][/list]

is equal to [i]r[/i] so 

[list]d[i]A[/i] = d[i]x[/i] d[i]y[/i] = [i]r[/i] d[i]r[/i] d[i]θ[/i]. [/list]
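
A minimal numerical sketch (in Python) of this change of variables: integrating the Jacobian [i]r[/i] over the unit disc recovers its area of π.

[code]
# Midpoint integration of r dr d(theta) over r in [0, 1], theta in [0, 2 pi].
import math

def disc_area(n=400):
    dr = 1.0 / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr                # midpoint of the radial interval
        total += r * dr * (2 * math.pi)   # the integrand is the Jacobian r
    return total

print(disc_area(), math.pi)   # the sum approximates pi
[/code]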

A central theorem states that the one-dimensional Fourier Transform of the projection of a two-dimensional function [i]f[/i]([i]x[/i], [i]y[/i]) to a line by the Radon Transform is equal to a section (slice) of the two-dimensional Fourier Transform of that function that is parallel to the projection line.
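
A minimal sketch of this central section (projection-slice) theorem (in Python with NumPy, for a hypothetical discrete image and a projection at zero angle):

[code]
# The 1-D Fourier Transform of a projection equals a section of the
# 2-D Fourier Transform parallel to the projection line.
import numpy as np

n = 64
f = np.zeros((n, n))
f[24:40, 24:40] = 1.0   # a hypothetical image: a centred square

# Radon projection at angle zero: sum (integrate) along the vertical axis.
projection = f.sum(axis=0)

slice_from_projection = np.fft.fft(projection)   # 1-D transform
slice_from_2d = np.fft.fft2(f)[0, :]             # section at zero frequency

print(np.allclose(slice_from_projection, slice_from_2d))   # True
[/code]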
