




by The Federal Republic of Alpenburg. 57 reads.

Electrical Engineering

Application of Mathematics & Science

The discipline of electrical engineering is an ideology of technology. It is an institution and system of concepts (ideas) that is an epistemic philosophy as an epistemology of theories and practices. Its logical structure is a paradigm (the historical episteme of Michel Foucault and the technical artifice of techne), an artificial process of cognition of conscious subjects for interaction (interpretation, evaluation, communication and production) with the objects of natural, universal or physical experience. This discourse is an imagination of existence that relates to the real conditions of existence. Louis Althusser proposes that whilst ideologies possess different forms, their function is similar in history. An ideology constitutes the subject transformed from an individual person. The recognition of the identity of the ego by conscience occurs internal to ideology, a model or structure to which it is impossible to be external in correspondence to an object of reality. Religion and morality are ideologies (as argued by Friedrich Nietzsche), and for many electrical engineers in Atlantis its discipline and principles are their doctrine and creed. Their society (social organisation) is the Institute of Electrical and Electronics Engineers (IEEE, a technical and professional association), as the guardians of the order of the electron and the devout of their heroes and saints. Electrical engineering is an application of the pure and fundamental mathematics and sciences of mathematicians and scientists, similar to the medics of medicine with the physical (natural and material) and empirical sciences. The following discussion includes some of the principal concepts whose mathematics are proof of its rites and cults.


Philosophy is a method of conceptual elucidation in science. The momentum of mathematics is frequently motivated by the resolution and application of problems. The impetus for the internal modification of mathematics came from external necessities and resources. Arithmetic and its study of quantity (i.e., natural, integer, rational, real and complex numbers and their operations) evolved to sustain commerce and taxation. Geometry, the spatial discipline of mathematics that studies the properties of space, originated to progress trigonometry, astronomy, and navigation. It would describe the space that human experience, through the cognisance of perception and conception, occupies, imagines and calculates. As with epistemological subjectivity and objectivity, in science a conflict exists between realists and relativists, who respectively argue that the description of the natural world is a true reality or a social construct. Similarly, philosophers debate whether the existence of mathematical entities is absolute (eternal and abstract ideas, and universal and certain objects) or fallible (corrigible and incomplete beliefs, and revisable and uncertain truths). In these views, mathematics is either discovered or invented. Wittgenstein proposed that mathematics consists of "language games", which are practices governed by rules that provide significance to the symbolism of concepts and ideas. These rules (norms) are of traditional, cultural and social origins, not logical necessity. Inspired by the scepticism of Hume, this fallibilism (common to Popper) argued that no mathematical definitions or proofs are final; instead, they are accepted only on the basis of authority and not by the conclusive justification of logic or reason.

In the ambiguous fable of Aesop, the hare (or the Celtic Iberian descendants as Latin lepus and Greek λεβηρίς or lebērís, as a relative of the cony) and tortoise race. Its interpretation and moral (satirised by Vikram Seth) is the oxymoronic adage festina lente (σπεῦδε βρᾰδέως or speûde bradéōs), illustrated with the emblem of the marine dolphin and nautical anchor. Compare this to Zeno's paradox of Achilles and the tortoise, for which Cauchy provided a satisfactory definition of the mathematical limit of the infinite summation of the geometric series as equal to the proportion of the "forespring" or temporal advance of time and the difference of unity and the relation of the spatial celerities of the tortoise and Achilles. A geometric progression (multiplied by a constant a) a(1 + r + r² + r³ + … + rⁿ), i.e. the summation of arᵏ from k = 0 to k = n (n + 1 terms), has the partial sum a(1 − rⁿ⁺¹) / (1 − r) where r ≠ 1. In the infinite limit n → ∞, where |r| < 1, the sequence or series converges to a / (1 − r). This formula was first described by Euclid in his Elements. Archimedes used it in The Quadrature of the Parabola for a proof that the area of a parabolic segment (a region enclosed by a parabola and a line) is 4/3 that of a specific inscribed triangle, by the dissection of the total area as an infinite sum of triangular areas. This method of exhaustion evolved into the method of indivisibles and eventually infinitesimal calculus (in particular definite integration).
It was applied in the geometry of The Method of Mechanical Theorems (in which he proved the relation of a and b as distances from the fulcrum to points A and B is equal to the relation of the velocities of these respective points, and equal to the relation of the force received FB and force transmitted FA by the machine of a lever), On the Sphere and Cylinder (in which he proved the formulae for the surface area and volume of a cylinder and the sphere it contains), and Measurement of a Circle (in which he approximated π, the relation of the circumference of a circle to its diameter, by the inscription and circumscription of similar regular polygons with a particular number of lateral edges or consecutive segments that intersect or connect at vertices or points in an intersection or union, whilst proving the area of a circle is the product of π and the quadrate of its radius or half its diameter).
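The partial-sum formula above can be checked numerically; a minimal Python sketch (the helper name is illustrative, not from the source), using Zeno's race with a = 1 and r = 1/2:

```python
# Partial sums of the geometric series a * (1 + r + r^2 + ... + r^n):
# S_n = a * (1 - r^(n+1)) / (1 - r), converging to a / (1 - r) when |r| < 1.

def geometric_partial_sum(a: float, r: float, n: int) -> float:
    """Closed form for the sum of a*r^k from k = 0 to k = n (r != 1)."""
    return a * (1 - r ** (n + 1)) / (1 - r)

a, r = 1.0, 0.5
direct = sum(a * r ** k for k in range(21))   # term-by-term summation, n = 20
closed = geometric_partial_sum(a, r, 20)      # closed-form partial sum
limit = a / (1 - r)                           # infinite-series limit (= 2 here)
print(direct, closed, limit)
```

The partial sum approaches the limit a / (1 − r) as n grows, which is the resolution of the paradox.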

The Germanic Saxon fable of the Brothers Grimm tells a similar variation with a senior hare and a commoner hedgehog (urchin and swine igel or egel who consumes turnips). In Atlantean myth, a divine cony with a mortar and pestle is a recognisable (perceivable and conceivable) illusion (i.e. erroneous and incorrect) of a familiar object, profile, figure, image or form as pareidolia of the lunar maria (plural of mare). This psychic phenomenon of a simulacrum (representation and formation in imagination) from a vague, aleatory, indistinct and indeterminate stimulus is analogous to the mythic and iconic constellation. The cony is referred to as a fenek in Maltese from the Arabic فَنَك or fanak for a vulpine fox of the Sahara. A terrier hound chases these animals that burrow (bury and covey) in terrestrial and buccal cubicles, caves, cavities, holes, hollows, warrens or bouns (e.g., a clap(eri)us). The name dassie refers to the hyrax as a Dutch diminutive of das (cf. German Dachs) for badger (or brock from the Celtic) that is known as tasugo in Spanish, teixugo in Galician, and texugo in Portuguese from the Germanic Gothic and related to the Latin (via Celtic tasgos) as in the Italian tasso, Spanish tejón, Catalan teixó, Galician teixo, which is not to be confused with the Scythian-origin taxus for "yew" as in the Spanish tejo. In French this mammal is known as blaireau from the Celtic Gaulish or Germanic Frankish blar. The other words are related to the Latin tela for "text, textile, tissue, fabric, membrane, web" as in "to weave" or tessere in Italian, tejer in Spanish, teixir in Catalan, tecer in Portuguese and Galician, and tisser in French. This is related to technical and architectural production and natural and artificial generation through the Greek τέχνη or tékhnē for the construction of structure and artifice.
Erasmus (cognate with the Greek éramai and Sanskrit rámate for "I love") in the Adagia (a record of the humanist sententiae and adages or expressions of abstraction) wrote that "a fox knows many things, but a hedgehog one important thing" (multa novit vulpes, verum echinus unum magnum). This influenced philosophical classification. Hedgehogs that view the world with the lens of a single idea or concept include Plato, Dante, Pascal, Nietzsche, and Proust. Foxes that view the world with multiple experiences or convictions include Aristotle, Erasmus, Shakespeare, and Goethe. Wittgenstein transformed himself from a hedgehog by nature into a fox by intellectual imagination in his philosophic transition. This humorous system of classification is similar to that proposed by Freeman Dyson, which distinguished an avian bird (fowl) from an amphibian frog (toad). A bird views the world as a physical unification of cognitive concepts with the mathematics (an art and a science) of natural philosophy. A frog views the world in observation and experimentation of facts, details and particulars. These equally important perspectives influence the formation of scientific theories. Birds are often mystics, such as Aristotle, Plato, Newton, Kepler and Einstein.

Francis Bacon, who was a figurative frog, first proposed the induction of the scientific method for the investigation of the physical (natural) laws of the world (Nature and Cosmos of factual and not logical verity, universal or statistical expressions, and conditional not categorical conceptions), in contrast to Descartes (a bird) with his deduction. The incomplete utopic novel New Atlantis by Bacon was published posthumously in 11626 HE. It depicts and envisions a society whose foundation is a scientific institution that conducts experiments using the organon (a process, system or method) he proposed in Novum Organum. The cover illustrated a galleon (a naval galley symbolic of empirical investigation and observation in natural philosophy of fact as the mental activity of experience with reason) passing the mythical Columns of Hercules (Heracles) at the Strait of Gibraltar or the ostium of the Mediterranean Sea to the Atlantic Ocean. He proposed a "new organ" of logic and syllogism (conclusions from propositions of notions, presumptions and premises). Bacon divided natural philosophy into physics (particular and variable causes) and metaphysics (general and constant causes). His method reduces the realm of apparitions to a reality accessible for manipulation. In this reduction of a posteriori induction, general axioms or universal principles are informed by the special particulars of the interpretations, impressions and observations of the senses. This contrasts with the a priori deduction of Aristotle, which Bacon criticises as an impediment to natural philosophy. Descartes, a contemporary of Bacon, advanced a rational, theoretical and deductive descent that diverged from this empirical, practical and inductive ascent of Bacon. For Descartes, the objective was absolute verity, whilst for Bacon it was the relative order of natural phenomena (causes).
Bacon rejected the inferences of essential anticipatio naturae ("anticipation of nature", with its conservative convention, conjecture, computation, prediction, projection, prevision and speculation) in favour of existential interpretatio naturae ("interpretation of nature") from a progressive collection of observable facts and methodical investigation of the complexity of Nature. Bacon argued forms and causes (material or substantial, formal or ideal, kinetic or efficient, and functional or final) are the universal physics of actual effects. Bacon rejected the last cause in the natural (not the artificial) for its superstitious conflation of theology and teleology in cosmology. The obstacles of critical examination are the idols of the tribe (idola tribus), the idols of the cave (idola specus), the idols of the market (idola fori) and the idols of the theatre (idola theatri). False idols are intellectual obfuscations and fallacies that originate from the cognitive malalignment of the conceptual reflections of imagination and its predispositions, suppositions and prejudiced generalities. Bacon argued humanity is a servant and interpreter of Nature and its phenomena and qualia.


The German mathematician Carl Friedrich Gauss (Gauß) considered the identity of Euler to be the pons asinorum ("bridge of asses") of mathematics, a point of reference for importance in its comprehension. This name refers to a proposition of geometry by the Greek mathematician Euclid of the city of Alexandria in Ptolemaic Egypt. It states the angles opposite the equal sides of an isosceles triangle are equal. As a metaphor, the name signifies a critical problem or test that functions to distinguish or separate a person by their intelligence. In the mathematical tract Elements (Στοιχεῖον or Stoikheîon), the name Dulcarnon (from the Arabic ذُو ٱلْقَرْنَيْن‎ or Ḏū al-Qarnayn for "he of the two horns" as in Alexander the Great and Cyrus the Great) refers to the Pythagorean Theorem. The three dimensions of space (with breadth, height and profundity) are defined by three axes with either (1) the Cartesian coordinates (named after René Descartes, whose Latin family name was Cartesius, whose personal name originates from the Latin Renatus as in "revive, resuscitate, reanimate, renovate, reincarnate, born again" with the cognate of Renato, and who first published the system that is fundamental to calculus in mathematics) of longitude (abscissa or horizontal distance) x, latitude (ordinate or lateral distance) y, and altitude (applicate or vertical distance) z; (2) the cylindrical coordinates of polar radius (radial distance) ρ or r, azimuth (polar angle or angular position) φ or θ, and altitude (axial position or normal distance to the polar plane) z; or (3) the spherical coordinates of polar radius (radial distance) ρ or r, zenith (polar angle, inclination or colatitude, which is 90 degrees or ° and π/2 radians minus the elevation or latitude with respect to the normal axial direction) θ, and azimuth (longitude) φ. Each of the three coordinate systems is related to the others by trigonometric functions of geometry.
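The relation between the Cartesian and spherical systems described above can be sketched with the usual trigonometric conversions; the helper names and the test point are illustrative assumptions:

```python
import math

def cartesian_to_spherical(x, y, z):
    """Return (rho, theta, phi): radial distance, zenith angle from +z, azimuth."""
    rho = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / rho)   # zenith (inclination), in [0, pi]
    phi = math.atan2(y, x)       # azimuth, in (-pi, pi]
    return rho, theta, phi

def spherical_to_cartesian(rho, theta, phi):
    """Inverse conversion back to (x, y, z)."""
    return (rho * math.sin(theta) * math.cos(phi),
            rho * math.sin(theta) * math.sin(phi),
            rho * math.cos(theta))

# Round-trip check on an arbitrary point.
p = (1.0, 2.0, 3.0)
q = spherical_to_cartesian(*cartesian_to_spherical(*p))
print(q)
```

The round trip recovers the original point up to floating-point error, confirming the two parameterisations describe the same space.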

In "logical perfection", Gauss was known for his inclusion of synthesis and omission of analysis. Before the modern invention of the "fast" (divide et impera, or "divide and conquer") signal processing algorithm for the discrete Fourier transform (a transformation by Joseph Fourier that decomposes a temporal or spatial function and signal into its constituent domain of frequencies; as a complex function of a real variable that transforms functions of a real variable, it is similar to the complex function of a complex variable named for Pierre-Simon Laplace, notable for his advance of celestial mechanics), he proposed trigonometric interpolation as a method. He studied in astronomy the gravitational and orbital mechanics of the solar system. In proof, he treated the optimisation and approximation method of the minimum quadrates (least squares: minimisation of error by the sum of the squares of the residual differences of the estimated values and observed data) for a system with more equations than the variables (quantitas incognita) that determine it. This he proved with his normal distribution of the probability of a continuous (aleatory or stochastic) variable with a real value and an expectation (mean, median, mode, variance and standard deviation) in statistics. Other distributions include that of Siméon Poisson, the binomial of Jacob Bernoulli, and that of the Baron of Rayleigh.
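The method of minimum quadrates (least squares) for an overdetermined system can be illustrated with a small line fit solved through the 2×2 normal equations; the data values below are hypothetical and the helper name is illustrative:

```python
def least_squares_line(points):
    """Fit y = m*x + b by minimising the sum of squared residuals.

    Solves the 2x2 normal equations directly (more equations than unknowns).
    """
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    det = n * sxx - sx * sx
    m = (n * sxy - sx * sy) / det
    b = (sxx * sy - sx * sxy) / det
    return m, b

# Noisy observations of (approximately) y = 2x + 1; figures are illustrative.
pts = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8), (4, 9.1)]
m, b = least_squares_line(pts)
print(m, b)
```

Five observations over-determine the two unknowns m and b; the fit recovers a slope near 2 and an intercept near 1.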

Laplace introduced a theorem, first proven by Thomas Bayes, that relates the conditional probabilities of events. Applied to statistical inference, Bayesian inference relates the posterior probability of a hypothesis H conditional (| or contingent) to the observation of event data D as evidence to the product (·) of the prior probability of the hypothesis and the probability (P for a probability density function for continuous variables or a probability mass function for discrete variables) of the event as a function of the evidence conditional to the hypothesis, normalised by the probability of the marginal model evidence. This can be written as:

    P(H | D) = P(D | H) · P(H) / P(D)

where the contingencies, with ¬ for "not" or the negation, ⋃ (∨) for "or" or the union (disjunction) and ⋂ (∧) for "and" or the intersection (conjunction), are:

  • P(D) = P(D | H) · P(H) + P(D | ¬H) · P(¬H) = P((D ⋂ H) ⋃ (D ⋂ ¬H)) = P(H | D) · P(D) + P(¬H | D) · P(D) = P((H ⋂ D) ⋃ (¬H ⋂ D));

  • P(¬D) = 1 − P(D) = P(¬D | H) · P(H) + P(¬D | ¬H) · P(¬H) = P((¬D ⋂ H) ⋃ (¬D ⋂ ¬H)) = P(H | ¬D) · P(¬D) + P(¬H | ¬D) · P(¬D) = P((H ⋂ ¬D) ⋃ (¬H ⋂ ¬D));

  • P(H) = P(D | H) · P(H) + P(¬D | H) · P(H) = P((D ⋂ H) ⋃ (¬D ⋂ H)) = P(H | D) · P(D) + P(H | ¬D) · P(¬D) = P((H ⋂ D) ⋃ (H ⋂ ¬D));

  • P(¬H) = 1 − P(H) = P(D | ¬H) · P(¬H) + P(¬D | ¬H) · P(¬H) = P((D ⋂ ¬H) ⋃ (¬D ⋂ ¬H)) = P(¬H | D) · P(D) + P(¬H | ¬D) · P(¬D) = P((¬H ⋂ D) ⋃ (¬H ⋂ ¬D)).

The theorem results in P(H ⋂ D) = P(H | D) · P(D) = P(D ⋂ H) = P(D | H) · P(H) for the joint (conjoined or bivariate) probability of dependent events (for independent events this respectively equals P(H) · P(D) and P(D) · P(H)). The predictive (previewed) prior and posterior distributions are the result of the marginalisation (the collection of the sub-ensemble of probabilities of the aleatory variables without reference to the other values) of the probabilistic distribution of a possible value of an event for its observations conditional to its prior and posterior distributions (for the parameter and hyperparameter prior and posterior to the observation of an event). The theorem is extendable as a general formulation to multiple events in a sequence of independent and identically distributed (iid) observations (E1, …, En) where a model is represented by an event (M among M1, …, Mm). Thus, the posterior probability P(M | E) is the quotient of the product of P(E | M) and P(M) (the prior probability, i.e. the consequence of antecedents) with the divisor as the summation of the products P(E | Mm) and P(Mm) over the m models. The likelihood (function of verisimilitude) P(E | M) is the product (Π) of the sequence of factors P(Ei | M) for the index of multiplication i as an element of the n observed events.
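Bayes' theorem with the total-probability expansion of P(D) above can be checked numerically; the prior, sensitivity, and false-positive figures below are hypothetical illustrations (a stylised diagnostic test):

```python
def bayes_posterior(p_h: float, p_d_given_h: float, p_d_given_not_h: float) -> float:
    """P(H | D) via Bayes' theorem, with P(D) expanded by total probability."""
    p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)   # marginal evidence
    return p_d_given_h * p_h / p_d

# Hypothetical test: prior P(H) = 1%, P(D | H) = 99%, P(D | not-H) = 5%.
post = bayes_posterior(0.01, 0.99, 0.05)
print(post)
```

Despite the high sensitivity, the low prior keeps the posterior modest, which is the characteristic behaviour the normalising marginal P(D) enforces.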

In addition to statistics, Gauss was interested in differential geometry, with its theory of plane and space curves. The curvature of surfaces and varieties (manifolds) is measurable by the angles, distances and rhythms that determine them. It was influenced by infinitesimal calculus, the mathematical study of the differential gradient fluxion and integral fluent function of a value or quantity that varies in dependence of variables. The priority strife between Isaac Newton and Gottfried Wilhelm Leibniz over the invention (conception and publication) of these ideas has concluded with their inventions recognised as independent of each other. The two mathematicians invented different notations, with two additional notations created by Euler and Joseph Louis Lagrange. Additionally, Gauss discovered geometries that were not Euclidean, at the intersection of metric geometry and affine geometry, to include hyperbolic and elliptic geometries. This would permit Einstein's theory of general relativity, which united in description gravity (gravitation) as a property of four dimensions (space and time). It related the curvature of spacetime to the energy and momentum of present matter and radiation. Maxwell's equations are compatible with Einstein's special and general theories of relativity. Einstein, though not a mathematician by profession, respected mathematics for its power and beauty.

The human mind processes the displacement (motion, which manifests as change in the directions or dimensions of space with respect to time), affine transformations (translation, reflection, dilatation, contraction, rotation, and transvection), and perspective (projection) observed in the visual field (sensory vision and object recognition by the collection and transduction of a signal). In descriptive graphical representation, the rectilinear rays of projection of an object in three-dimensional space are parallel, and intersect orthogonally or obliquely with the two-dimensional picture or plane of image. In perspective, parallel lines appear to converge at a point of fugue or flight (a vanishing point; if the parallel lines are orthogonal to the plane of image, the point corresponds to the oculus, the location or station of the ocular observer). The intersection (i.e., not a void ensemble) of geometric objects occurs at a point (of two lines, or a line and a plane) or a common ensemble of points (a line where two planes intersect) in space. Algebra, with its algorithmic foundations and regulations, extended arithmetic (and its binary operations, varying in the properties of association, commutation, and distribution) with the implementation of abstract structures (e.g., variables, functions, matrices, and vectors). These vectors, or geometric quantities with magnitude (module or absolute value norm as a scalar with a unit) and direction (orientation and sense in reference to the referential basis and order), have a course in space and momentum in motion. The position of these vectors is defined by the coordinate system. They can be normalised to unit vectors; each vector in space can be written as a linear combination of these (with the coordinates as coefficients) if the basis is formed by a linearly independent system of unit vectors as elements that generate the vector space (whose dimension is the cardinality of the basis).
In the canonical basis, the unit vectors are mutually orthogonal (perpendicular, or normal to the tangent plane of a surface). A vector is an eigenvector ("own, proper, self") of a linear transformation (operator or application) if the transformation of that vector is a scalar (called an eigenvalue) multiple of it. In a finite-dimensional vector space, the linear transformation, which does not mutate the orientation of such a vector, can be expressed as a matrix. With differentiation (continuous and instantaneous variation) and integration (summation of definite and infinite series of quantities), infinitesimals (functional limits) would progress this further (e.g., convolution and correlation).
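The eigenvector relation, that the transformation of the vector is a scalar multiple of it, can be illustrated for a small symmetric matrix; the matrix and helper name below are illustrative assumptions:

```python
import math

def eigen_2x2_symmetric(a, b, c):
    """Eigenvalues of the symmetric matrix [[a, b], [b, c]] via its characteristic polynomial."""
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(tr * tr / 4 - det)   # real for symmetric matrices
    return tr / 2 + disc, tr / 2 - disc

# For [[2, 1], [1, 2]] the eigenvalues are 3 and 1.
lam1, lam2 = eigen_2x2_symmetric(2, 1, 2)

# Check A v = lambda v for the eigenvector v = (1, 1) of lambda = 3:
v = (1.0, 1.0)
Av = (2 * v[0] + 1 * v[1], 1 * v[0] + 2 * v[1])
print(lam1, lam2, Av)
```

Applying the matrix to (1, 1) returns (3, 3): the direction is unchanged and the vector is simply scaled by its eigenvalue.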

The extension of the differentiation and integration calculus of one variable to functions with multiple (independent) variables permits the study of dynamics of systems with multiple degrees of freedom. The domain of one-dimensional curves (with longitude) and two-dimensional surfaces (with area) corresponds to n-dimensional Euclidean space (real coordinate space of dimension n as the codomain). A scalar field of n-dimensions corresponds to one-dimensional space of numbers, values or quantities. The application of Lagrange multipliers (of Joseph-Louis Lagrange) to locate maxima and minima (plural of maximum or minimum) of a function is a method of optimisation that subjects the function to equality constraints (conditions). The formulation of the gradient of the objective function f(x, y) and the gradients of the equality constraints function g(x, y) = 0 results in the Lagrangian function of stationary points

    L(x, y, λ) = f(x, y) − λg(x, y)

for the variables x and y of n-dimensions and the Lagrange multiplier λ. The calculation of the gradient of L(x, y, λ) and its optimisation (where it equals zero, without explicit parameterisation in terms of the constraints) results in critical points at the optimum (local and global optima) and saddle points. The gradient (∇ of a function as its partial derivatives with respect to its variables or directions, as symbolised with the ∂ that is analogous to the d of the total derivative), divergence and rotation operations in vector calculus are applied to vector fields with a domain of n-dimensions and a codomain of m-dimensions. Suppose a vector-valued function f, such that each of its first-order partial derivatives exists in the n-dimensional space, accepts an argument x that is an element (member) of that space to produce f(x) in m-dimensional space. The matrix J (the Jacobian, named after Carl Gustav Jacob Jacobi) of f is defined to be an m×n matrix (number of rows by number of columns) whose (i, j)th entry is the partial derivative of the ith component of f (the row index) with respect to the jth variable (the column index). The matrix, a row of column vectors, represents the differential of f at every point x where f is differentiable. For a column matrix of a displacement vector (y − x), the optimal linear approximation of f(y) is

    f(x) + J(x) ⋅ (y − x)

(the product of matrix multiplication of an m×n matrix and an n×1 matrix is an m×1 matrix). If m = n, then the Jacobian matrix is a square matrix, so its determinant is defined. The determinant encodes properties of the linear transformation described by the matrix, i.e. the n-dimensional volume scale factor. In a transformation with a continuous bijective correspondence, the determinant is the factor applied to the differential for the change of variables of coordinate systems in an integral. The polymath John von Neumann (né János) contributed to linear programming (optimisation of a linear objective function, subject to linear equality and linear inequality restrictions), game theory, cellular automata, computer architecture and quantum mechanics.
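The Jacobian determinant as a volume (here, area) scale factor can be sketched with polar coordinates, where det J = r; the midpoint quadrature below is an illustrative check that the change of variables recovers the area π of the unit disc (helper names are assumptions):

```python
import math

def jacobian_polar(r, theta):
    """Jacobian matrix of (x, y) = (r cos t, r sin t) with respect to (r, t)."""
    return [[math.cos(theta), -r * math.sin(theta)],
            [math.sin(theta),  r * math.cos(theta)]]

def det_2x2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# The determinant is r: the area element transforms as dx dy = r dr dt.
J = jacobian_polar(2.0, 0.7)
print(det_2x2(J))   # r = 2.0

# Integrate 1 over the unit disc via the change of variables (midpoint rule in r):
n = 400
area = sum(det_2x2(jacobian_polar((i + 0.5) / n, 0.0)) * (1.0 / n) * (2 * math.pi)
           for i in range(n))
print(area)
```

The quadrature returns π because the determinant r supplies exactly the factor the substitution requires.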


In electromagnetism (electrodynamics), Gauss discovered two of the four partial differential equations (with integral forms in vector calculus by Oliver Heaviside, whose step function is related to the unit impulse of Paul Dirac that is important in signal processing) published by Maxwell. The first theorem describes the static electric field and the electric charges that cause it, whereby a static electric field is directed from positive charges to negative charges. The net flux (divergence) of the electric field through any closed surface (opposite of one that is open, overt or apert) is proportional to the total charge enclosed by the surface, irrespective of its distribution. This can be derived from the inverse square law of Charles-Augustin de Coulomb that quantifies the magnitude of the electrostatic force between two electric charges. The force, by some constant, is directly proportional to the product of the magnitudes of the charges and inversely proportional to the square of the distance that separates their centres. This is analogous to Newton's law of universal gravitation for point particles of mass. The second states the magnetic field in materials is caused by dipole configuration (e.g., a circuit ring with current). That is, its divergence (the total flux through a closed surface) is equal to zero. The third describes induction (i.e., a magnetic field that varies in time induces an electric field that varies in space, and vice-versa) discovered by Michael Faraday.
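Coulomb's inverse-square law can be evaluated numerically; the charges and separations below are hypothetical illustrations, and the constant is the approximate value of 1/(4πε₀):

```python
K_COULOMB = 8.9875517873e9   # N*m^2/C^2, approximately 1/(4*pi*eps0)

def coulomb_force_n(q1_c: float, q2_c: float, r_m: float) -> float:
    """Magnitude of the electrostatic force between two point charges (newtons)."""
    return K_COULOMB * q1_c * q2_c / (r_m ** 2)

# Two elementary charges, 1 nm and then 2 nm apart (illustrative figures):
e = 1.602176634e-19
f1 = coulomb_force_n(e, e, 1e-9)
f2 = coulomb_force_n(e, e, 2e-9)
print(f1, f1 / f2)   # doubling the distance quarters the force
```

The ratio f1/f2 = 4 exhibits the inverse-square dependence from which Gauss's flux theorem can be derived.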

The synchronous and induction (or asynchronous) motors of Nikola Tesla contributed to the modern polyphase system of electrical energy with its alternating current. Inductance was independently discovered by Joseph Henry. His work would be practical for telegraphy (e.g., Charles Wheatstone, Ernst Werner von Siemens, and the first electromagnetic telegraph invented by Gauss and Wilhelm Weber and funded by Alexander von Humboldt). The work per unit charge necessary for the motion of a charge around a closed loop equals the rate of change of the magnetic flux contained by the surface. Its notation may use a rotational vector operation (curl) to describe the infinitesimal circulation of a field. Heaviside used the term reluctance to describe the magnetic resistance as magnetomotive force (equivalent to the product of the number of complete revolutions, rotations, circles, throws or wends, which is the index of envelopment of a curve with chiral orientation, and the current in the spiral) divided by magnetic flux (its inverse is permeance as the permeation of magnetic flux, which is the analogue to electrical conductance or the inverse of electrical resistance in electrical circuits). Permeability of magnetic circuits is thus analogous to electrical conductivity, magnetomotive force to electromotive force, magnetic field to electric field, magnetic flux density to current density, and magnetic flux to electric current.

The fourth (an addition to that of André-Marie Ampère) states a magnetic field can be generated (induced around a closed loop) in proportion by an electric current and a changing electric field (displacement current). In derivation of the electromagnetic wave equation, electromagnetic radiation and optic illumination were unified. As corollaries to Maxwell's equations, the circuit laws of Gustav Kirchhoff (based on the work of Alessandro Volta and Georg Ohm) for a lumped concentration of pieces (components, parameters or elements) described current and potential difference. By the conservation of charge, the sum of the flow currents (a signed positive or negative quantity that reflects direction) at a connection (node, junction or point) is zero. At low frequencies, the sum of the potential differences around any closed loop in a state space (reduced to a finite dimension, such that the partial differential equations of the continuous, infinite-dimensional time and space model of the physical system become ordinary differential equations) is zero. Heinrich Hertz first proved the existence and propagation of the electromagnetic waves that Maxwell predicted in his equations of electromagnetism. In SI, the unit of electric current A (Ampère) is defined by the elementary charge (of a positive proton, or the negative of an electron) of 1.602176634×10−19 C (Coulomb) per second. The unit of electric potential difference V (Volt) is defined as one J (Joule) of electric potential energy per C of electric charge, where a J is the thermal energy dissipated when one A of current passes through a resistance of one Ω (Ohm, equivalent to the quotient of V and A because potential difference is directly proportional to the product of current and resistance, or the inverse of one S or Siemens of conductance) for one second. The unit of electrical power W (Watt) is equal to the product of V and A. The unit of capacitance F (Farad) is equal to the quotient of C and V.
The unit of inductance H (Henry) is the quotient of Wb (Weber) and A, where Wb is the unit of magnetic flux of the product of V and s or the product of density T (Tesla) and m2 (area).
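The unit relations cited above (V = A·Ω, W = V·A, J = W·s) can be checked with a small numeric sketch; the circuit figures are hypothetical and the helper names are illustrative:

```python
def ohms_law_voltage(current_a: float, resistance_ohm: float) -> float:
    return current_a * resistance_ohm      # V = I * R

def power_w(voltage_v: float, current_a: float) -> float:
    return voltage_v * current_a           # W = V * A

def energy_j(power: float, seconds: float) -> float:
    return power * seconds                 # J = W * s

# A 2 A current through a 5 Ohm resistor for 10 s (illustrative figures):
v = ohms_law_voltage(2.0, 5.0)   # 10 V
p = power_w(v, 2.0)              # 20 W
e = energy_j(p, 10.0)            # 200 J dissipated as heat
print(v, p, e)
```

Chaining the three relations reproduces the dimensional bookkeeping of the SI definitions in the paragraph above.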


In physics, the quantisation of light by Max Planck resulted in Albert Einstein interpreting the quanta to be photons (particles of an electromagnetic field). He proposed that the energy of a photon is proportional to its frequency, which indicated a wave–particle duality. Energy and momentum are analogously related as temporal frequency and spatial frequency are in special relativity (its spacetime was developed by Hermann Minkowski and its transformations were derived by Hendrik Lorentz, who arrived at the electromagnetic force implied by Maxwell's equations). Louis de Broglie postulated that material particles with mass (e.g., an electron) possess wave properties. The quantised orbits correspond to discrete energy levels of the atomic model of Niels Bohr, which improved upon that of Ernest Rutherford. The resemblance of mechanics and optics became stronger. The principle of Pierre de Fermat connects geometric (ray) optics with physical (wave) optics as an analogy of the principle of minimal action. The propagation consequences of this principle are that the proportion of the sines of the angles of incidence and refraction is equivalent to that of the velocities of phase (or wavelengths) in two isotropic media, and that the angle of incidence equals the angle of reflection at the interface or surface.
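The proportionality of photon energy and frequency can be evaluated numerically; the constants are the exact SI values, and the chosen frequency is an illustrative assumption (visible green light):

```python
H_PLANCK = 6.62607015e-34   # J*s (exact SI value)
C_LIGHT = 299792458.0       # m/s (exact SI value)

def photon_energy_j(frequency_hz: float) -> float:
    """Planck relation E = h * f for a single photon."""
    return H_PLANCK * frequency_hz

# Green light at about 540 THz:
f = 5.4e14
e = photon_energy_j(f)
wavelength = C_LIGHT / f   # about 555 nm
print(e, wavelength)
```

The energy comes out on the order of 10⁻¹⁹ J, which is why quantisation is invisible at everyday scales yet decisive at the atomic scale of Bohr's discrete levels.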

Lenses are typically spherical such that the two optical surfaces are portions of spheres (convex, concave or planar) with a central axis. A biconcave or plano-concave lens diverges collimated light (i.e., is negative). A biconvex or plano-convex lens converges collimated light (i.e., is positive) to a focal point or focus. The equation for the reciprocal of the focal length f is

    1 / f = (n − 1)(1 / R1 − 1 / R2 + d(n − 1) / (nR1R2))

for the refractive index n (which is equal to c / v, or the celerity of light divided by the velocity of phase) of the lens material, the radius of curvature of the lens surface R1 in the vicinity of the light, the radius of curvature of the lens surface R2 not in the vicinity of the light, and the breadth d of the lens (the distance along the lens axis between the two surface vertices). Convex surfaces are indicated by R1 > 0 and R2 < 0. Concave surfaces are indicated by R1 < 0 and R2 > 0. Lenses are either convergent (f > 0) or divergent (f < 0). Magnification is equal to −di / do for the distance of the object to the lens do and the distance of the image to the lens di. This is equivalent to the proportion of the height of the image and the height of the object. The angular magnification of a telescope is the relation of the focal length (of the objective lens in a refractor or of the primary mirror in a reflector) and the focal length of the ocular lens. In a microscope it is the relation of the focal length of the objective and the distance between the focal planes of the objective and ocular lenses. For negligible d, the dioptric potency 1 / f is equal to 1 / do + 1 / di. This is the Gaussian lens formula. The composite dioptric potency of thin lenses in contact is the sum of the individual potencies. In optometry, a corrective lens (for oculus dexter or "right eye" and oculus sinister or "left eye" in the perspective of the person) is prescribed, constructed and dispensed with a spherical correction in dioptric potency (positive for convergent and negative for divergent lenses). Dissimilar to a cylindrical lens for the cylindrical correction of an astigmatism (deviation) that focuses light into a line not a point, a spherical lens has equal (uniform) curvature and dioptric potency in all directions (meridians) perpendicular to the optical axis. 
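
The lensmaker's equation and the Gaussian lens formula above can be sketched together; the glass index and radii below are illustrative assumptions.

```python
import math

# Sketch of the lensmaker's equation and the Gaussian lens formula
# above; the glass index and radii below are illustrative assumptions.

def lensmaker_focal_length(n, r1, r2, d=0.0):
    """1/f = (n - 1)(1/R1 - 1/R2 + d(n - 1)/(n R1 R2))."""
    potency = (n - 1) * (1 / r1 - 1 / r2 + d * (n - 1) / (n * r1 * r2))
    return 1 / potency

# Thin symmetric biconvex lens: R1 > 0, R2 < 0, negligible breadth d.
f = lensmaker_focal_length(n=1.5, r1=0.10, r2=-0.10)

# Gaussian lens formula 1/f = 1/do + 1/di and magnification -di/do.
do = 0.3
di = 1 / (1 / f - 1 / do)
assert f > 0 and di > 0            # convergent lens forms a real image
assert math.isclose(-di / do, -0.5)
```
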
There are two notations (conventions) for a prescription where the cylindrical correction is either plus cylinder (more convergent) or minus cylinder (more divergent) relative to the spherical correction (sphere). To convert (calculate a conversion) between the notations, add the sphere and cylinder numbers (values) to form the new sphere and invert the sign of the cylinder. Then add 90° to the axis value (subtract 180° from the result if it exceeds 180°). An axis of 90° is vertical, whilst 0° or 180° are horizontal. A positive cylindrical dioptric potency is most convergent 90° from the axis, whilst a negative one is most divergent 90° from the axis. If it is zero, the lens is spherical. In photography, the focal number N (the reciprocal of the relative aperture) is the focal length f divided by the pupil diameter of the effective aperture d. The illuminance of the projected image relative to the luminance in the field of view (vision) reduces with the square of this focal number N. The profundity of field (depth of field) of an objective lens for acceptable focus is approximately proportional to the number (for a circle of confusion of an image, or its conjugate scaled by magnification, a focal length and a distance to an object in the focal plane). Lenses (physical or geometric) are a perspective transformation from real object space to virtual image space. They are subject as surfaces to radiant and luminous exposure from the energetic and photic flux of irradiance and illuminance. Luminous intensity (which is analogous to radiant intensity) is the luminous flux per steradian (the three-dimensional analogue of the two-dimensional radian), which is different from the radiant flux (the radiant power or energy per unit of time) of the total electromagnetic radiation (not solely the visible spectrum).
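
The transposition procedure above can be sketched as a small routine; the sample prescription is an illustrative assumption.

```python
# Sketch of the transposition procedure above between plus-cylinder
# and minus-cylinder notation; the prescription is illustrative.

def transpose(sphere, cylinder, axis):
    """Add sphere and cylinder, invert the cylinder, rotate the axis 90°."""
    new_axis = axis + 90
    if new_axis > 180:
        new_axis -= 180
    return sphere + cylinder, -cylinder, new_axis

# Illustrative plus-cylinder prescription: +2.00 +1.00 x 90.
rx = transpose(2.00, 1.00, 90)
assert rx == (3.00, -1.00, 180)
# Transposing twice recovers the original prescription.
assert transpose(*rx) == (2.00, 1.00, 90)
```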

In lenses, the focal distance to the focus where axial light from infinity converges (or appears to, if it diverges) depends on the concave or convex radial curvature of the surface of the refractive material. The formation of real or virtual images depends on the curvature of the lens and the location of the object relative to the focal distance. Real images are formed by the convergence (as opposed to the extension of the divergence for virtual images) of light rays that can project at a real location. In vision, the eye (composed of the cornea, sclera, iris, pupil, ciliary muscle, sphincter, etc.) functions as a lens, receiving light (illumination through an aperture in dilation) as visual stimulus from the external world of observation. It projects a scale replica of this visual field onto the retina at the rear of the eye. At the retina the transduction of the visual signals to neural signals occurs. These ocular sensory data (information) connect by the optic nerve fibres to neural circuits and cerebral structures for filtering and processing. The detection of objects and motion is by the apparent contrast generated, or the difference of light luminance or colour (wavelength, which is inversely proportional to frequency and is represented with the symbol λ or lambda, a symbol that elsewhere denotes the unrelated eigenvalue). The single-lens reflex (reflection of light by a mirror at a 45-degree angle, with a projection to a pentaprism for internal reflection to be viewed as an appearance in the ocular lens) camera is popular in photography. The photographic camera functions control the light sensitivity of the film or sensory matrix (transducers), the obturator velocity (duration of exposure to the light projected by the objective lens), and the aperture (the diaphragm of the objective with a diameter in terms of focal distance that controls exposure to light and profundity of field). 
The transducer to the image medium is an analogue transparent plastic substrate as a latent photochemical image in a colloid suspension, or a digital photoreceptor of metal–oxide–semiconductor (MOS) capacitors or transistors that represents the image as pixels in the (magnetic disc or band, or electronic solid-state) memory of the integrated circuit. These sensory detectors replaced the cathode ray tubes (a dissector that focuses the light or photons of a scene onto the photocathode that emits electrons in the photoelectric effect, where the magnitude of the electric current at the anode is proportional to the luminance of the image) in videographic cameras. A charge-coupled device transfers photogenerated electric charge between capacitors that represent pixels. In a similar conversion of radiation (from the detection of a photon to the generation of an electron in a current with a photodiode), a transistor of an active (i.e., with amplification and not passive) pixel image sensor converts electric charge per pixel. Both technologies convert the charge to a potential difference at the junction.

Diffraction is a phenomenon occurring when the propagation of waves encounters the geometrical shadow or umbra of an obstacle or aperture. Interference from superposition results in maxima and minima. The Huygens–Fresnel principle states that each point on a primary wave is a source of spherical wavelets, and the secondary wavelets emanating from different points mutually interfere. The diffraction equation of Kirchhoff is derived from the wave equation, in which the second temporal derivative of the displacement u(x1, x2, …, xn; t) is proportional to the product of the squared velocity of propagation and the Laplacian or Laplace operator ∇2, where the nabla ∇ is the vector of the partial derivatives of the n-dimensional coordinates with the canonical or natural basis or unit vectors. The Laplacian ∇2 is the scalar product ∇ · ∇, or the divergence of the gradient of a function. The vector product (notated with ×) of the ∇ with a vector field is the rotational vector operation (rotation). The approximation of the diffraction equation in the far-field region (with r as the distance from radiation, from two wavelengths or 2λ to infinity or ∞) is referred to as Fraunhofer diffraction (named for Joseph von Fraunhofer), whilst in the near-field region (within one wavelength or λ, where the frontier of the reactive region is r = λ / 2π and that of the radiative region is r = λ) it is referred to as Fresnel diffraction. In diffraction and antennae, the distinction is defined by the distance 2D2 / λ for diameter D. In optics, Augustin-Jean Fresnel, Christiaan Huygens, Thomas Young and Robert Hooke are credited for advancing the wave theory of light (in contrast to the particle theory of Newton) that was subsumed in Maxwell's electromagnetic equations.
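
The region boundaries above can be computed for an assumed antenna; the 2.4 GHz frequency and 0.5 m diameter below are illustrative assumptions.

```python
import math

# Sketch of the near- and far-field boundaries above for an antenna of
# diameter D at wavelength lambda; frequency and diameter illustrative.

c = 299_792_458.0                  # celerity of light, m/s

def far_field_distance(diameter, wavelength):
    """Fraunhofer (far-field) boundary: r = 2 D^2 / lambda."""
    return 2 * diameter ** 2 / wavelength

lam = c / 2.4e9                    # wavelength at 2.4 GHz, about 0.125 m
r_reactive = lam / (2 * math.pi)   # frontier of the reactive region
r_far = far_field_distance(0.5, lam)

# The reactive frontier lies within one wavelength; the Fraunhofer
# boundary lies well beyond it for an electrically large antenna.
assert r_reactive < lam < r_far
```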


Maxwell with Josiah Willard Gibbs and Ludwig Boltzmann invented statistical mechanics. Gibbs (who independently invented vector calculus) proposed thermodynamics (founded by Sadi Carnot, who is named after the Persian poet Saadi of Shiraz; James Watt, James Joule, and the Baron of Kelvin are important in this discipline, amongst others) as the consequence of the statistical properties of ensembles of the possible macrostates of a physical system composed as a collection of a multitude of particles assigned with probabilities. In his elucidation of the irreversibility of physical processes in probabilistic terms, he generalised the interpretation of entropy (introduced by Rudolf Clausius) for an arbitrary ensemble with all possible microstates and their corresponding probabilities, which would influence the information theory of Claude Shannon. In emulation of the form

    S = kB ln Ω

by Boltzmann and Gibbs as the expression for thermodynamic entropy S, with Ω (Omega) the number of equally probable microstates that correspond to the thermodynamic macrostate, and constant kB (named for Boltzmann and equal to 1.380649×10−23 J⋅K−1, for a unit K or Kelvin of thermodynamic temperature) in a statistical micro-canonical ensemble (system), Shannon formulated information entropy H as the summation from i = 1 to i = n of

    pi logb(1 / pi)

for the probability pi of the ith element in the message space with a cardinality of n for a logarithmic base b (equal to 2 if the unit of entropy is in bits, or e for the natural logarithm ln). The probability distribution pi is equal to 1 / n when any element in the message space is of equal probability. The information entropy is the expected value of the information content of the events it contains. It is per symbol of communication and is inversely proportional to its frequency or certainty of occurrence. The communication of a message can be modelled by its transmission space where it is encoded and transmitted as a signal sequence by the transmitter for the channel. The introduction of a disturbance is represented as a conditional probability. The signal of the message is received and decoded in the reception space where it is estimated by the receiver for interpretation. The joint entropy H(X, Y) is determined by the joint probability of two discrete aleatory variables X and Y with n and m possible values and expected values E(X) and E(Y). For equivocation, or conditional entropy, the entropy of Y conditioned on X, H(Y|X), is equivalent to H(X, Y) − H(X). Transinformation, or mutual information I, measures the reduction of uncertainty of one variable (a signal) in the observation of another. It is symmetrical in properties where:

    I(X, Y) = I(Y, X) = H(X) − H(X|Y) = H(Y) − H(Y|X) = H(X) + H(Y) − H(X, Y).
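
These measures can be verified on a small joint distribution; this sketch assumes an illustrative 2×2 distribution not taken from the text.

```python
import math

# Sketch of Shannon's measures above on a small joint distribution
# p(x, y); the 2x2 probabilities below are illustrative assumptions.

def entropy(probs, b=2):
    """H = sum of p_i log_b(1 / p_i), ignoring zero-probability elements."""
    return sum(p * math.log(1 / p, b) for p in probs if p > 0)

joint = {('x0', 'y0'): 0.5, ('x0', 'y1'): 0.25,
         ('x1', 'y0'): 0.125, ('x1', 'y1'): 0.125}

px, py = {}, {}
for (x, y), p in joint.items():    # marginal distributions
    px[x] = px.get(x, 0.0) + p
    py[y] = py.get(y, 0.0) + p

h_x, h_y = entropy(px.values()), entropy(py.values())
h_xy = entropy(joint.values())

h_y_given_x = h_xy - h_x           # equivocation H(Y|X) = H(X, Y) - H(X)
i_xy = h_x + h_y - h_xy            # transinformation (mutual information)
assert math.isclose(i_xy, h_y - h_y_given_x)
assert i_xy >= -1e-12              # mutual information is non-negative
```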

Signals are mathematical functions with a continuous or discrete dependent value (real, imaginary or complex with magnitude and phase) and a continuous or discrete independent variable (temporal, spatial or dimensional). These variable coordinates of time or space are discrete if they are integers and are continuous if they are real numbers. Discretisation in the amplitude (magnitude) of the signal is referred to as quantisation. Discretised digital indices of the variables of the analogue signal are samples. According to the Nyquist–Shannon sampling theorem (named for Claude Shannon, who met Alan Turing, and Harry Nyquist, who worked with Hendrik Bode), the sampling frequency must be greater than twice the maximum frequency of the original signal (20 kHz for human audition) for it to be reproduced (reconstructed); the signal is restricted in spectral bandwidth with the passage of an anti-aliasing filter that attenuates frequencies greater than half the sampling frequency. This filter is typically a low-pass filter, whose ideal is a rectangular function in the frequency domain that is linear, is time-invariant, and attenuates to zero amplitude (from one) at half the sampling frequency, or π radians rotational (angular or circular) frequency ω (omega) (2πf, for temporal frequency f in Hz or Hertz and reciprocal seconds, which is equal to one cycle per second or s as defined by 9192631770 cycles of the hyperfine structure transition frequency of caesium-133 atoms, the stable isotope with 55 protons and 78 neutrons in an atomic nucleus). Its time domain impulse response is a cardinal sine function or sin(πt) / πt in time t. A filter is implemented as the convolution of the signal with the impulse response. A linear contraction (or expansion) in the time domain corresponds in a duality to a linear expansion (or contraction) in the frequency domain. 
The sampling operation is mathematically equivalent to the multiplication of the signal with the sampling function, which is a serial pecten (a "comb" or "train") or periodic sum (where a period is the inverse of frequency) of an infinite sequence of Dirac impulses.
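
The theorem above can be illustrated by its violation: sampled at 10 Hz, a 7 Hz sine produces exactly the samples of a folded alias. A minimal sketch, with illustrative frequencies:

```python
import math

# Sketch of aliasing when the theorem above is violated: sampled at
# fs = 10 Hz, a 7 Hz sine yields the same samples as a -3 Hz sine
# (7 = 10 - 3, folded about fs / 2). The frequencies are illustrative.
fs = 10.0

def sample(freq, n):
    """n samples of sin(2 pi f t) taken at the instants t = k / fs."""
    return [math.sin(2 * math.pi * freq * k / fs) for k in range(n)]

high = sample(7.0, 20)             # above the Nyquist frequency fs / 2
alias = sample(-3.0, 20)           # indistinguishable alias below fs / 2
assert all(math.isclose(a, b, abs_tol=1e-9) for a, b in zip(high, alias))
```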

Data compression (reduction by a code of codification) is a discipline of computation in the information technology sector. The evaluation of its efficacy and efficiency is defined in terms of the emphasis of the practise of the practician and the theory of the theoretician: the practical compression time and the theoretical compression ratio. This rate, ratio, division or relation that corresponds to the complexity of the signal or ensemble of data and media, and the algorithm or implementation, is the metric or measurement of the relative reduction in quantity (the "tally" or "taille", as the magnitude, dimension and proportion) of the data representation produced. Modes of registration and presentation of information include text, pictures, forms, artefacts, images, literal and graphic documents, visual video and aural audio in the registers of the medium of memory. The two classes of data compression algorithms are reversible and irreversible (without and with perdition, distortion and degradation). The primary computes a statistical model of the data and then transforms the data such that "probable" (encountered with a greater frequency of occurrence) data is assigned shorter bit sequences or chains than "improbable" data using entropy coding (encoders and decoders, e.g. the arithmetic coding process and the process of David Huffman). The optimal code length for a symbol in the method of Shannon is –logb(Pi), where b is the number of symbols in the coding alphabet and Pi is the probability of the symbol i. For b = 2, there are 2^n potential levels (amplitudes, phases or frequencies) representative of symbolic signals communicated (systemic information transferred) with n bits (binary digits) per pulse or symbol in a temporal unit interval or duration of seconds. 
The method of Robert Fano orders the symbols by probability and divides them into two (binary or dyadic) ensembles with approximately equal total probabilities with a successive determination, distribution and allocation of digital codes. The method of Huffman uses an arboreal (tree) data structure that inverts the direction of the division from the radical root to the foliate leaves, whilst resulting in optimal prefix codes. It creates a node for each symbol in a queue where probability (frequency of occurrence) corresponds to priority. Whilst there is more than one node in the queue, the algorithmic process:

  1. Removes the two leaf nodes of least probability from the queue;

  2. Adjoins 0 and 1 as prefixes respectively to any code already assigned to these nodes;

  3. Creates a new internal node with these two nodes as progeny and with probability equal to the sum of the probabilities of the two nodes;

  4. Adds the new node to the queue.

The residual node (with the greatest probability) is the root (radix) node. Other algorithms (e.g., those of Abraham Lempel and Jacob Ziv) digest a stream of data with the substitution of repeated occurrences of data (with the unit of bits) with a reference to their position in an associative table of fields (a collection of attributes, names or keys in a finite domain) as a correspondent ensemble of values. The generated or constructed models of estimated or measured statistics are either static (modular) or dynamic (adaptive). The secondary approximates (inexact and imperfect, not exact or perfect) a duplication (regeneration, reproduction, reconstruction and recreation) of the original digital data (information) by a cycle of transformation (a function and conversion) of compression and expansion. Dissimilar to reversible compression, these processes result in artefacts (discernible, perceptible, distinguishable and visible effects, e.g. temporal and spatial aliases).
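
The enumerated procedure above can be sketched with a priority queue; the message "abracadabra" is an illustrative assumption.

```python
import heapq
from collections import Counter

# Sketch of the enumerated Huffman procedure above with a priority
# queue; the message "abracadabra" is an illustrative assumption.

def huffman_codes(message):
    freq = Counter(message)
    # Queue entries: (probability weight, tiebreaker, {symbol: code}).
    heap = [(w, i, {sym: ''}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w0, _, c0 = heapq.heappop(heap)    # two least-probable nodes
        w1, _, c1 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in c0.items()}   # adjoin prefixes
        merged.update({s: '1' + c for s, c in c1.items()})
        heapq.heappush(heap, (w0 + w1, count, merged)) # internal node
        count += 1
    return heap[0][2]                      # the residual root node

codes = huffman_codes("abracadabra")
# The most frequent symbol receives the shortest code, and no code is
# a prefix of another (the prefix property).
assert len(codes['a']) == 1
vals = list(codes.values())
assert not any(x != y and y.startswith(x) for x in vals for y in vals)
```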

Quantum Mechanics

Quantum mechanics (in the description of state in a physical system with a function of the superposition of vectors) combines a probabilistic (stochastic) interpretation with deterministic dynamics in evolution. The description of the possibilities of an abstract system is a representation of Nature (natural reality). This cosmic physical theory postulates that the world of local experience exists as one of multiple parallel worlds of reality. In the act of measurement (registration of experimentation and observation), the transition from the "possible" to the "actual" occurs in the interaction (connection and relation) between the object and the subject. It represents a collapse or reduction of the function to an eigenstate (i.e., an observable property or characteristic including position, momentum and energy, with the transfer of the latter associated with causality in spacetime). For example, a photon of light affects the properties of the phenomenon (e.g., an electron) and the value of the quantity measured (observed and experienced). The measurement of the location (certain position) of light (photons) or current (electrons), which propagates as a wave with amplitude, interference and diffraction, collapses its undulation (und, wave, waw or billow) function of degrees of freedom (that describe the states of vibration in the quantum system) and exhibits comportment similar to a particle. The experiment of Albert Michelson and Edward Morley determined no evidence for the existence of the luminiferous aether, the supposed medium for light. This result initiated research in special relativity. Enrico Fermi was the first to realise that the mass–energy equivalence possessed consequences of energetic radiation from the radioactivity of nuclear fission. Einstein formulated this in his theory of special relativity for energy E, mass m and the celerity of light c:

    E = mc2.

From this relativistic physics, Erwin Schrödinger published his diffusion equation for the probability amplitude to describe the state function of a quantum-mechanical system. Werner Heisenberg introduced his alternative and equivalent formulation of quantum mechanics with matrix mechanics. He would develop a principle that asserts a fundamental limit to the certainty with which the values of the complementary physical quantities of position and momentum of a particle in motion can be determined. Richard Feynman also introduced his path integral formulation where there are an infinity of possible trajectories of action. A graphical diagram (a method presented by Feynman and Dyson) represents the contribution of perturbations to the transition amplitude probabilities for a quantum system from the initial to the final state. Wolfgang Pauli formulated a quantum mechanical principle that conditions, for two or more identical (indistinguishable or indiscernible) particles with a half-integer gyration as an intrinsic form of angular momentum, that it is impossible for them to occupy the same state in a quantum system simultaneously. This exclusion extends to leptons (elementary particles such as electrons and neutrinos) and baryons (composite particles that are a type of hadron such as protons and neutrons). A photon, which possesses zero mass, is not included because it mediates force and interactions, and is not of the generations of matter. Particles such as these possess an integer gyration and the property of a symmetric wave function. Two electrons in the same atomic orbital have equal values for their quantum numbers: the principal quantum number, the azimuthal quantum number and the magnetic quantum number. They do not have an equal quantum number that indicates the gyration and its orientation as a vector. This gyration is the fourth degree of freedom in their state. The total wave function for multiple such particles is antisymmetric. Elementary particles are fundamental and material constituents. 
Each particle associates with an antiparticle with equal mass and opposite charge. The superposition principle (i.e., where a linear combination of solutions to a linear equation is a solution of it) is applicable to the vectors of quantum states. The configurations of particles in a general state of a system are specified by complex numbers (a phase vector or complex amplitude) as coefficients. This is analogous to a probability distribution in statistics, where the probabilities of mutually exclusive events total (sum) to unity (the probability of their union or disjunction). Max Born related the absolute value (modulus or magnitude), where

    r = |z| = √(x^2 + y^2)

and the tangent of phase is

    tan φ = sin φ / cos φ = y / x,

to the product of the number

    z = x + i y = r e^(iφ)

and its complex conjugate

    z* = x − i y = r e^(−iφ)

(with a notation not to be confused with a conjugate transpose of a matrix with real and imaginary numbers as complex elements from m×n to n×m): this product, or the square (quadrate) of the probability amplitude, is the probability (continuous density, in contrast to discrete mass) that physical particles are in a spatial configuration, position or situation at a temporal instant. For electrons, superposition manifests as the physical interference phenomenon of amplitude in the double-slit (fissure) experiment of Young.
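
The relations above between a complex amplitude, its conjugate and the Born probability can be checked numerically; the amplitudes below are illustrative assumptions.

```python
import cmath
import math

# Sketch of the relations above: for z = x + iy = r e^(i phi), the
# product of z with its conjugate is the squared modulus, and
# normalised amplitudes give probabilities that total to unity.

z = complex(3.0, 4.0)
r, phi = abs(z), cmath.phase(z)
assert math.isclose(r, math.hypot(3.0, 4.0))           # r = sqrt(x^2 + y^2)
assert math.isclose((z * z.conjugate()).real, r ** 2)  # z z* = |z|^2
assert cmath.isclose(cmath.rect(r, phi), z)            # r e^(i phi) = z

# A two-state superposition: illustrative amplitudes normalised so the
# Born probabilities |z|^2 of mutually exclusive events sum to unity.
amps = [complex(1.0, 1.0), complex(0.0, 1.0)]
norm = math.sqrt(sum(abs(a) ** 2 for a in amps))
probs = [abs(a / norm) ** 2 for a in amps]
assert math.isclose(sum(probs), 1.0)
```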


A spiral or helical spool (roll or volute) of insulated or isolated conductive filament is wound (involved or enrolled) around a magnetic core (kernel or nucleus) to filter high frequencies ("noise") as a passive low-pass filter (the filtration and attenuation of electromagnetic radio-frequency interference greater than a "cut frequency" with a response that passes or permits continuous current and low-frequency alternating current). The magnet is typically ferrite (a ceramic material of iron or ferric oxide in a composite with metal oxides). They are ferrimagnetic, which is a type of spontaneous magnetisation distinct from ferromagnetism, where all the magnetic moments of a material are aligned (i.e., none are in the opposite direction). Their electrical resistance (the reciprocal of conductance) diminishes induced parasitic currents in planes perpendicular (in a direction that opposes, which was formulated by Heinrich Friedrich Emil Lenz and was discovered as a phenomenon by Jean Bernard Léon Foucault) to a changing flux of a magnetic field. The magnetic coercivity categorises the ability of a ferromagnetic material to not become demagnetised in the application of an external magnetic field (confer with electric coercivity as analogous for the ability of a ferroelectric material to not become depolarised in application of an external electric field). Atop the substrate of magnetic bands (tapes), a strate (layer) of ferric oxide particles is used for the recording of information. The aleatory (direct, as opposed to sequential) access memories of computers used ferrite toroids as transformer cores where magnetic hysteresis permitted the record of a state as one bit of information (determined by the chiral direction of the magnetisation) in non-volatile memory. A transformer consists of a primary and a secondary spool (each a cylinder with a number or quantity of windings) wound around a core (toroidal ring). 
A varied primary current produces a magnetic flux in the permeable core that induces a varied electromotive force (potential difference) in the secondary spool. The secondary current produced creates a magnetic flux equal and opposite to that produced by the primary current. The symmetry of a toroid reduces the perdition of flux and possesses a greater inductance than a solenoid. Dissimilar to inductors (reactors), a ferrite filter converts radio-frequency energy to the dissipation of heat. It results in a complex impedance (with the components of resistance, inductive reactance and capacitive reactance) that impedes these signals. An inductor results in an inductive reactance. A conductive cable acts as an antenna that receives interference and transmits emissions as a radiator. The balance of a line or circuit is determined by the equality or symmetry of the impedances of the conductors with respect to ground or Earth. It results in the equal exposure to external magnetic fields and the induction of a common mode signal.
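
The transformer described above can be sketched in its ideal, lossless form (an assumption; real cores suffer perdition of flux); the winding counts and primary values below are illustrative.

```python
import math

# Sketch of the ideal (lossless) transformer implied above: secondary
# EMF scales with the turns ratio and currents scale inversely. The
# winding counts and primary values are illustrative assumptions.

def ideal_transformer(v_primary, i_primary, n_primary, n_secondary):
    ratio = n_secondary / n_primary
    return v_primary * ratio, i_primary / ratio

vs, is_ = ideal_transformer(230.0, 2.0, 1000, 100)   # 10:1 step-down
assert math.isclose(vs, 23.0) and math.isclose(is_, 20.0)
# Apparent power is conserved in the ideal case (no perdition).
assert math.isclose(230.0 * 2.0, vs * is_)
```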

Nuclear Magnetic Resonance

Nuclear magnetic resonance (NMR) is used in magnetic resonance imaging (MRI) for medical and clinical diagnosis, as is computed tomography (CT). MRI and X-ray electromagnetic radiation (discovered by Wilhelm Röntgen) facilitate medics in directing therapy or surgery. All nucleons (neutrons or protons as particles of an atomic nucleus) have the intrinsic quantum property of gyration. The gyration (angular momentum) is proportionate (∝) to a magnetic dipole moment. These align parallel or anti-parallel in the presence of a magnetic field. The particles precess (in an orientation that is either parallel or anti-parallel to the gyration) around the precessional axis (the direction of the static external magnetic field). The frequency of the precession (named for Joseph Larmor) is proportional to the external magnetic field, which exerts a rotational force on the magnetic dipole moment. Felix Bloch introduced the equations of motion for nuclear magnetisation. The magnetisation (polarisation) consists of longitudinal and transverse components. The particles in space relax (return or recuperate with a time constant of a first-order, linear time-invariant system that is the reciprocal of the relaxation dynamic) to the initial thermodynamic equilibrium state of gyration with a longitudinal magnetic relaxation (parallel to the external magnetic field). In transverse magnetic relaxation (perpendicular to the external magnetic field), the particles relax in alignment (decay to zero) and cease production of the electromagnetic signal with the radio (Larmor) frequency at an oscillation of resonance.

For liquid materials, the relaxation time constant of the longitudinal relaxation is equal to that of the transverse relaxation. For viscous liquids and solids, the longitudinal relaxation time is greater than the transverse relaxation time. The application of a 90-degree pulse of a constant magnetic field results in alignment (magnetisation or polarisation). The transmitted oscillation of the transverse magnetisation induces a current in the receiver as a proportional signal. The Larmor frequency is contained in an envelope of the transverse magnetisation that relaxes to zero after the termination of the pulse. The pulse rotates the longitudinal magnetisation into the transverse plane for detection. The heterogeneity (not homogeneity) of the magnetic field in space results in different gyrations and frequencies of precession. After the 90-degree pulse, the evolution results in dephased gyrations in the transverse plane and a reduction of the transverse relaxation time constant. To mitigate this, an inversion by a 180-degree pulse inverts the longitudinal magnetisation and one component of the transverse magnetisation. If this pulse occurs at half the time between the 90-degree pulse and the received "echo", the gyrations return to phase so the time constant of the echo formation can be measured. An image is formed from the gradient fields in space of the magnetic field. These gradients selectively excite gyrations with a band of radio frequencies that corresponds to a Larmor frequency. This selective excitation is analogous to a projection. The Fourier Transform of the signal produces a projection of the transverse magnetisation (whose phase is related to the application of gradients) through an object.
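
The Larmor relation and the two relaxations above can be sketched numerically; the proton gyromagnetic ratio is the accepted approximate value, whilst the field strength and time constants are illustrative assumptions.

```python
import math

# Sketch of the Larmor relation f = gamma B / (2 pi) and the two
# relaxations above. GAMMA_PROTON is the accepted approximate value;
# the field strength and time constants are illustrative assumptions.

GAMMA_PROTON = 2.675e8             # rad / (s T), approximate

def larmor_frequency(b_field):
    return GAMMA_PROTON * b_field / (2 * math.pi)

f = larmor_frequency(1.5)          # roughly 64 MHz at 1.5 T
assert 60e6 < f < 70e6

t1, t2 = 0.9, 0.1                  # seconds; T1 > T2, as for solids

def mz(t):                         # longitudinal recovery toward one
    return 1 - math.exp(-t / t1)

def mxy(t):                        # transverse decay toward zero
    return math.exp(-t / t2)

# After five time constants the first-order responses have settled.
assert mz(5 * t1) > 0.99 and mxy(5 * t2) < 0.01
```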

The image obtained and reconstructed from an examination or CT scan of a specimen aids in the detection of regions and margins (edges or borders) of anatomy (tissues in physiology and pathology) by division and segmentation for the classification (supervised determination) of the normality or abnormality of the extractions. Methods include support vector machines, k-means vector quantisations, and k-nearest neighbours algorithms. Tomographic reconstruction depends on the Fourier Transform for analysis and its inverse for synthesis. The inversion produces an image of the function (object) from its projection. Convolution with a kernel h is equivalent to filtration. This transformation named for Fourier, where augmentation (expansion) of the spatial or temporal scale results in reduction (contraction) of the frequency scale in continuous and discrete domains, and where differentiation and convolution correspond to the operation of multiplication, is equal to the dimensional finite, definite, and infinite summation ∑ or infinitesimal integration ∫ transformation of the product of a function of one-dimensional time t or two-dimensional space x and y and

    exp(–j2πft) or exp(–j2π(ux + vy)),

where exp is the exponential function of e and j is the imaginary unit i (in polar form, e^(iπ/2)) in a variation of

    e^(−i2πft) = cos(2πft) − i sin(2πft)

for frequency f or frequencies u and v from –∞ to ∞ for each dimension. Time and space can be discretised by index of a sequence or series as discrete quantities at indices (k or n). The Radon Transform is the line integral of the function, with a ray or line L at angle θ ∈ [0°, 180°) at a right-hand chirality from the x-axis and orthogonal to the z-axis that is parameterised as

    (x(z), y(z)) = (r cos θ − z sin θ, r sin θ + z cos θ)

for an arc length z and a distance from the origin r. This is the result of the rotation matrix, a transformation for a vector as a column vector. All points on L satisfy the equation

    r = x cos θ + y sin θ.

The projection is equivalent to the double integral (from –∞ to ∞ for dx and dy) of the product of the function f(x, y) and the Dirac delta function

    δ(x cos θ + y sin θ − R)

such that δ(L(R, θ)) is zero except on line L. The magnitude or norm of the vector function

    x(z) i + y(z) j

(where unit vectors i and j are normalised and orthogonal) is

    √((dx/dz)^2 + (dy/dz)^2)

(in this case it is equal to unity) for dz. A linear transformation that preserves area, volume or n-dimensional contents (hypervolume of hyperspace in the multiplicity of Euclidean space with hypersurfaces and hyperplanes, which are of one less dimension than their ambient space) is absent of distortion. If preservation occurs, the determinant of the derivative is equal to one; otherwise it is the scale factor. In one dimension, for a function f(x) where u = g(x), the integral of the product of f(g(x)) and dg(x)/dx for an interval [a, b] with the differential dx is equal to the integral of f(u) for the interval [c, d] where c = g(a) and d = g(b) with the differential du. For two dimensions, a function f(x, y) and the transformations

    x = g(u, v)


    y = h(u, v),

its integral over the region R with the differential dA of dx and dy is equal to the integral over the surface S of the product of f(g(u, v), h(u, v)) and the Jacobian (the determinant of the partial derivatives of x and y with respect to u and v) with the differential du dv. Consider how the Jacobian of the transformation to polar coordinates

    x = r cos θ


    y = r sin θ

is equal to r so

    dA = dx dy = r dr dθ.
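
This change of variables can be verified numerically: integrating the Jacobian r over dr and dθ recovers the area of a disc, πR². A minimal sketch:

```python
import math

# Numerical sketch of the change of variables above: integrating the
# Jacobian r over dr and d(theta) recovers the area of a disc, pi R^2.

def disc_area(radius, nr=1000):
    """Midpoint-rule integral of r dr d(theta) over [0, R] x [0, 2 pi)."""
    dr = radius / nr
    # The integrand r is independent of theta, so the theta integral
    # contributes the constant factor 2 pi.
    return sum((i + 0.5) * dr * dr for i in range(nr)) * 2 * math.pi

assert math.isclose(disc_area(2.0), math.pi * 4.0, rel_tol=1e-9)
```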

A central theorem (the projection-slice or Fourier slice theorem) states that the one-dimensional Fourier Transform of the projection of a two-dimensional function f(x, y) onto a line by the Radon Transform is equal to a section of the two-dimensional Fourier Transform of that function that is parallel to the projection line.