<HTML>
<HEAD>
<TITLE>Ritter's Crypto Glossary and Dictionary of Technical Cryptography</TITLE>
<META NAME = "DESCRIPTION"
CONTENT = "Hyperlinked definitions and discussions of many
cryptographic, mathematic, logic, statistics, and electronics terms
used in cipher construction and analysis.
A Ciphers By Ritter page.">
<META NAME = "KEYWORDS"
CONTENT = "cipher,crypto,cryptography,cryptographic,cryptology,
definition,dictionary,encryption,electronics,explain,explained,
explanation,glossary,information,learning,mathematics,statistics,
understanding">
</HEAD>
<BODY>
<H1 ALIGN = CENTER>Ritter's Crypto Glossary <I>and</I> <BR>
Dictionary of Technical Cryptography</H1>
<P><H2 ALIGN = CENTER>Technical Cryptographic Terms Explained</H2>
<BLOCKQUOTE><BIG><I>
Hyperlinked definitions and discussions of many cryptographic,
mathematic, logic, statistics, and electronics terms used in
cipher construction and analysis.</I></BIG>
</BLOCKQUOTE>
<H2 ALIGN = CENTER>A <A HREF="http://www.io.com/~ritter/"><I>Ciphers By Ritter</I></A> Page</H2>
<BR><H2 ALIGN = CENTER>Terry Ritter</H2>
<H2 ALIGN = CENTER>Current Edition: 1999 Jan 19</H2>
For a basic introduction to cryptography, see
<A HREF = "http://www.io.com/~ritter/LEARNING.HTM">Learning About Cryptography</A>.
Please feel free to send comments and suggestions for improvement to:
<A HREF = "mailto:
You may wish to help support this work by patronizing
<A TARGET = "Bshop"
HREF = "http://www.io.com/~ritter/BOOKSHOP.HTM">Ritter's Crypto Bookshop</A>.
<P><HR><H2>Contents</H2>
<DL>
<DT><BIG>A</BIG>
<DD>
<A HREF = "#Absolute">Absolute</A>,
<A HREF = "#AC">AC</A>,
<A HREF = "#AdditiveCombiner"><NOBR>Additive Combiner</NOBR></A>,
<A HREF = "#AdditiveRNG"><NOBR>Additive RNG</NOBR></A>,
<A HREF = "#Affine">Affine</A>,
<A HREF = "#AffineBooleanFunction"><NOBR>Affine Boolean Function</NOBR></A>,
<A HREF = "#Alphabet">Alphabet</A>,
<A HREF = "#AlternativeHypothesis"><NOBR>Alternative Hypothesis</NOBR></A>,
<A HREF = "#Amplifier">Amplifier</A>,
<A HREF = "#Amplitude">Amplitude</A>,
<A HREF = "#Analog">Analog</A>,
<A HREF = "#AND">AND</A>,
<A HREF = "#ASCII">ASCII</A>,
<A HREF = "#Associative">Associative</A>,
<A HREF = "#AsymmetricCipher"><NOBR>Asymmetric Cipher</NOBR></A>,
<A HREF = "#Attack">Attack</A>,
<A HREF = "#AugmentedRepetitions"><NOBR>Augmented Repetitions</NOBR></A>,
<A HREF = "#AuthenticatingBlockCipher"><NOBR>Authenticating Block Cipher</NOBR></A>,
<A HREF = "#Authentication">Authentication</A>,
<A HREF = "#Autokey">Autokey</A>,
<A HREF = "#Avalanche">Avalanche</A>,
<A HREF = "#AvalancheEffect"><NOBR>Avalanche Effect</NOBR></A>
<DT><BIG>B</BIG>
<DD>
<A HREF = "#BackDoor"><NOBR>Back Door</NOBR></A>,
<A HREF = "#Balance">Balance</A>,
<A HREF = "#BalancedBlockMixer"><NOBR>Balanced Block Mixer</NOBR></A>,
<A HREF = "#BalancedBlockMixing"><NOBR>Balanced Block Mixing</NOBR></A>,
<A HREF = "#BalancedCombiner"><NOBR>Balanced Combiner</NOBR></A>,
<A HREF = "#Base64">Base-64</A>,
<A HREF = "#Bel">Bel</A>,
<A HREF = "#BentFunction"><NOBR>Bent Function</NOBR></A>,
<A HREF = "#BernoulliTrials"><NOBR>Bernoulli Trials</NOBR></A>,
<A HREF = "#Bijective">Bijective</A>,
<A HREF = "#Binary">Binary</A>,
<A HREF = "#BinomialDistribution"><NOBR>Binomial Distribution</NOBR></A>,
<A HREF = "#BirthdayAttack"><NOBR>Birthday Attack</NOBR></A>,
<A HREF = "#BirthdayParadox"><NOBR>Birthday Paradox</NOBR></A>,
<A HREF = "#Bit">Bit</A>,
<A HREF = "#Block">Block</A>,
<A HREF = "#BlockCipher"><NOBR>Block Cipher</NOBR></A>,
<A HREF = "#BlockSize"><NOBR>Block Size</NOBR></A>,
<A HREF = "#Boolean">Boolean</A>,
<A HREF = "#BooleanFunction"><NOBR>Boolean Function</NOBR></A>,
<A HREF = "#BooleanFunctionNonlinearity"><NOBR>Boolean Function Nonlinearity</NOBR></A>,
<A HREF = "#BooleanLogic"><NOBR>Boolean Logic</NOBR></A>,
<A HREF = "#BooleanMapping"><NOBR>Boolean Mapping</NOBR></A>,
<A HREF = "#Break">Break</A>,
<A HREF = "#BruteForceAttack"><NOBR>Brute Force Attack</NOBR></A>,
<A HREF = "#Bug">Bug</A>,
<A HREF = "#Byte">Byte</A>
<DT><BIG>C</BIG>
<DD>
<A HREF = "#Capacitor">Capacitor</A>,
<A HREF = "#CBC">CBC</A>,
<A HREF = "#cdf">c.d.f.</A>,
<A HREF = "#CFB">CFB</A>,
<A HREF = "#Chain">Chain</A>,
<A HREF = "#Chaos">Chaos</A>,
<A HREF = "#ChiSquare">Chi-Square</A>,
<A HREF = "#Cipher">Cipher</A>,
<A HREF = "#CipherTaxonomy"><NOBR>Cipher Taxonomy</NOBR></A>,
<A HREF = "#Ciphering">Ciphering</A>,
<A HREF = "#Ciphertext">Ciphertext</A>,
<A HREF = "#CiphertextExpansion"><NOBR>Ciphertext Expansion</NOBR></A>,
<A HREF = "#Ciphony">Ciphony</A>,
<A HREF = "#Circuit">Circuit</A>,
<A HREF = "#Clock">Clock</A>,
<A HREF = "#Closed">Closed</A>,
<A HREF = "#Code">Code</A>,
<A HREF = "#Codebook">Codebook</A>,
<A HREF = "#CodebookAttack"><NOBR>Codebook Attack</NOBR></A>,
<A HREF = "#Combination">Combination</A>,
<A HREF = "#Combinatoric">Combinatoric</A>,
<A HREF = "#Combiner">Combiner</A>,
<A HREF = "#Commutative">Commutative</A>,
<A HREF = "#Complete">Complete</A>,
<A HREF = "#Component">Component</A>,
<A HREF = "#Computer">Computer</A>,
<A HREF = "#Conductor">Conductor</A>,
<A HREF = "#Confusion">Confusion</A>,
<A HREF = "#ConfusionSequence"><NOBR>Confusion Sequence</NOBR></A>,
<A HREF = "#Congruence">Congruence</A>,
<A HREF = "#Contextual">Contextual</A>,
<A HREF = "#ConventionalCipher"><NOBR>Conventional Cipher</NOBR></A>,
<A HREF = "#Convolution">Convolution</A>,
<A HREF = "#Correlation">Correlation</A>,
<A HREF = "#CorrelationCoefficient"><NOBR>Correlation Coefficient</NOBR></A>,
<A HREF = "#CRC">CRC</A>,
<A HREF = "#Cryptanalysis">Cryptanalysis</A>,
<A HREF = "#Cryptanalyst">Cryptanalyst</A>,
<A HREF = "#Cryptographer">Cryptographer</A>,
<A HREF = "#CryptographicMechanism"><NOBR>Cryptographic Mechanism</NOBR></A>,
<A HREF = "#Cryptography">Cryptography</A>,
<A HREF = "#CryptographyWar"><NOBR>Cryptography War</NOBR></A>,
<A HREF = "#Cryptology">Cryptology</A>,
<A HREF = "#Current">Current</A>
<DT><BIG>D</BIG>
<DD>
<A HREF = "#dB">dB</A>,
<A HREF = "#DC">DC</A>,
<A HREF = "#Debug">Debug</A>,
<A HREF = "#Decibel">Decibel</A>,
<A HREF = "#Decimal">Decimal</A>,
<A HREF = "#Decipher">Decipher</A>,
<A HREF = "#Decryption">Decryption</A>,
<A HREF = "#DeductiveReasoning"><NOBR>Deductive Reasoning</NOBR></A>,
<A HREF = "#DefinedPlaintextAttack"><NOBR>Defined Plaintext Attack</NOBR></A>,
<A HREF = "#DegreesOfFreedom"><NOBR>Degrees of Freedom</NOBR></A>,
<A HREF = "#DES">DES</A>,
<A HREF = "#DesignStrength"><NOBR>Design Strength</NOBR></A>,
<A HREF = "#Deterministic">Deterministic</A>,
<A HREF = "#DictionaryAttack"><NOBR>Dictionary Attack</NOBR></A>,
<A HREF = "#DifferentialCryptanalysis"><NOBR>Differential Cryptanalysis</NOBR></A>,
<A HREF = "#Diffusion">Diffusion</A>,
<A HREF = "#Digital">Digital</A>,
<A HREF = "#Diode">Diode</A>,
<A HREF = "#Distribution">Distribution</A>,
<A HREF = "#Distributive">Distributive</A>,
<A HREF = "#DivideAndConquer"><NOBR>Divide and Conquer</NOBR></A>,
<A HREF = "#Domain">Domain</A>,
<A HREF = "#Dyadic">Dyadic</A>,
<A HREF = "#DynamicKeying"><NOBR>Dynamic Keying</NOBR></A>,
<A HREF = "#DynamicSubstitutionCombiner"><NOBR>Dynamic Substitution Combiner</NOBR></A>,
<A HREF = "#DynamicTransposition"><NOBR>Dynamic Transposition</NOBR></A>
<DT><BIG>E</BIG>
<DD>
<A HREF = "#ECB">ECB</A>,
<A HREF = "#ElectricField"><NOBR>Electric Field</NOBR></A>,
<A HREF = "#ElectromagneticField"><NOBR>Electromagnetic Field</NOBR></A>,
<A HREF = "#Electronic">Electronic</A>,
<A HREF = "#Encipher">Encipher</A>,
<A HREF = "#Encryption">Encryption</A>,
<A HREF = "#Entropy">Entropy</A>,
<A HREF = "#Ergodic">Ergodic</A>,
<A HREF = "#ExclusiveOR">Exclusive-OR</A>,
<A HREF = "#Extractor">Extractor</A>
<DT><BIG>F</BIG>
<DD>
<A HREF = "#Factorial">Factorial</A>,
<A HREF = "#Fallacy">Fallacy</A>,
<A HREF = "#FastWalshTransform"><NOBR>Fast Walsh Transform</NOBR></A>,
<A HREF = "#FCSR">FCSR</A>,
<A HREF = "#FeistelConstruction"><NOBR>Feistel Construction</NOBR></A>,
<A HREF = "#FencedDES">Fenced DES</A>,
<A HREF = "#Fencing">Fencing</A>,
<A HREF = "#FencingLayer"><NOBR>Fencing Layer</NOBR></A>,
<A HREF = "#FFT">FFT</A>,
<A HREF = "#Field">Field</A>,
<A HREF = "#FiniteField"><NOBR>Finite Field</NOBR></A>,
<A HREF = "#FlipFlop"><NOBR>Flip-Flop</NOBR></A>,
<A HREF = "#FourierSeries"><NOBR>Fourier Series</NOBR></A>,
<A HREF = "#FourierTheorem"><NOBR>Fourier Theorem</NOBR></A>,
<A HREF = "#FourierTransform"><NOBR>Fourier Transform</NOBR></A>,
<A HREF = "#Frequency">Frequency</A>,
<A HREF = "#Function">Function</A>,
<A HREF = "#FWT">FWT</A>
<DT><BIG>G</BIG>
<DD>
<A HREF = "#Gain">Gain</A>,
<A HREF = "#GaloisField">Galois Field</A>,
<A HREF = "#Gate">Gate</A>,
<A HREF = "#GF2n">GF 2<SUP>n</SUP></A>,
<A HREF = "#GoodnessOfFit"><NOBR>Goodness of Fit</NOBR></A>,
<A HREF = "#Group">Group</A>
<DT><BIG>H</BIG>
<DD>
<A HREF = "#HammingDistance"><NOBR>Hamming Distance</NOBR></A>,
<A HREF = "#Hardware">Hardware</A>,
<A HREF = "#Hash">Hash</A>,
<A HREF = "#Hexadecimal">Hexadecimal (Hex)</A>,
<A HREF = "#Homophonic">Homophonic</A>,
<A HREF = "#HomophonicSubstitution"><NOBR>Homophonic Substitution</NOBR></A>
<DT><BIG>I</BIG>
<DD>
<A HREF = "#IDEA">IDEA</A>,
<A HREF = "#IdealSecrecy"><NOBR>Ideal Secrecy</NOBR></A>,
<A HREF = "#i.i.d.">i.i.d.</A>,
<A HREF = "#InductiveReasoning"><NOBR>Inductive Reasoning</NOBR></A>,
<A HREF = "#Inductor">Inductor</A>,
<A HREF = "#Injective">Injective</A>,
<A HREF = "#Insulator">Insulator</A>,
<A HREF = "#Integer">Integer</A>,
<A HREF = "#IntermediateBlock"><NOBR>Intermediate Block</NOBR></A>,
<A HREF = "#Interval">Interval</A>,
<A HREF = "#Into">Into</A>,
<A HREF = "#Inverse">Inverse</A>,
<A HREF = "#Invertible">Invertible</A>,
<A HREF = "#Involution">Involution</A>,
<A HREF = "#Irreducible">Irreducible</A>,
<A HREF = "#IV">IV</A>
<DT><BIG>J</BIG>
<DD>
<A HREF = "#Jitterizer">Jitterizer</A>
<DT><BIG>K</BIG>
<DD>
<A HREF = "#KB">KB</A>,
<A HREF = "#Kb">Kb</A>,
<A HREF = "#KerckhoffsRequirements"><NOBR>Kerckhoff's Requirements</NOBR></A>,
<A HREF = "#Key">Key</A>,
<A HREF = "#KeyDistributionProblem"><NOBR>Key Distribution Problem</NOBR></A>,
<A HREF = "#KeyedSubstitution"><NOBR>Keyed Substitution</NOBR></A>,
<A HREF = "#Keyspace">Keyspace</A>,
<A HREF = "#KnownPlaintextAttack"><NOBR>Known Plaintext Attack</NOBR></A>,
<A HREF = "#KolmogorovSmirnov"><NOBR>Kolmogorov-Smirnov</NOBR></A>
<DT><BIG>L</BIG>
<DD>
<A HREF = "#Latency">Latency</A>,
<A HREF = "#LatinSquare"><NOBR>Latin Square</NOBR></A>,
<A HREF = "#LatinSquareCombiner"><NOBR>Latin Square Combiner</NOBR></A>,
<A HREF = "#Layer">Layer</A>,
<A HREF = "#LFSR">LFSR</A>,
<A HREF = "#Linear">Linear</A>,
<A HREF = "#LinearComplexity"><NOBR>Linear Complexity</NOBR></A>,
<A HREF = "#LinearFeedbackShiftRegister"><NOBR>Linear Feedback Shift Register</NOBR></A>,
<A HREF = "#LinearLogicFunction"><NOBR>Linear Logic Function</NOBR></A>,
<A HREF = "#Logic">Logic</A>,
<A HREF = "#LogicFunction"><NOBR>Logic Function</NOBR></A>,
<A HREF = "#LSB">LSB</A>
<DT><BIG>M</BIG>
<DD>
<A HREF = "#MSequence">M-Sequence</A>,
<A HREF = "#MachineLanguage"><NOBR>Machine Language</NOBR></A>,
<A HREF = "#MagneticField"><NOBR>Magnetic Field</NOBR></A>,
<A HREF = "#ManInTheMiddleAttack"><NOBR>Man-in-the-Middle Attack</NOBR></A>,
<A HREF = "#Mapping">Mapping</A>,
<A HREF = "#MarkovProcess"><NOBR>Markov Process</NOBR></A>,
<A HREF = "#MathematicalCryptography"><NOBR>Mathematical Cryptography</NOBR></A>,
<A HREF = "#MaximalLength"><NOBR>Maximal Length</NOBR></A>,
<A HREF = "#MB">MB</A>,
<A HREF = "#Mb">Mb</A>,
<A HREF = "#Mechanism">Mechanism</A>,
<A HREF = "#MechanisticCryptography"><NOBR>Mechanistic Cryptography</NOBR></A>,
<A HREF = "#MersennePrime"><NOBR>Mersenne Prime</NOBR></A>,
<A HREF = "#MessageDigest"><NOBR>Message Digest</NOBR></A>,
<A HREF = "#MessageKey"><NOBR>Message Key</NOBR></A>,
<A HREF = "#MITM">MITM</A>,
<A HREF = "#Mixing">Mixing</A>,
<A HREF = "#MixingCipher">Mixing Cipher</A>,
<A HREF = "#Mod2"><NOBR>Mod 2</NOBR></A>,
<A HREF = "#Mod2Polynomial"><NOBR>Mod 2 Polynomial</NOBR></A>,
<A HREF = "#Mode">Mode</A>,
<A HREF = "#Modulo">Modulo</A>,
<A HREF = "#Monadic">Monadic</A>,
<A HREF = "#MonoalphabeticSubstitution"><NOBR>Monoalphabetic Substitution</NOBR></A>,
<A HREF = "#Monographic">Monographic</A>,
<A HREF = "#MultipleEncryption"><NOBR>Multiple Encryption</NOBR></A>
<DT><BIG>N</BIG>
<DD>
<A HREF = "#Nomenclator">Nomenclator</A>,
<A HREF = "#Nominal">Nominal</A>,
<A HREF = "#Nonlinearity">Nonlinearity</A>,
<A HREF = "#NOT">NOT</A>,
<A HREF = "#NullHypothesis"><NOBR>Null Hypothesis</NOBR></A>
<DT><BIG>O</BIG>
<DD>
<A HREF = "#ObjectCode"><NOBR>Object Code</NOBR></A>,
<A HREF = "#Objective">Objective</A>,
<A HREF = "#Octal">Octal</A>,
<A HREF = "#Octave">Octave</A>,
<A HREF = "#OFB">OFB</A>,
<A HREF = "#OneTimePad"><NOBR>One Time Pad</NOBR></A>,
<A HREF = "#OneToOne">One-To-One</A>,
<A HREF = "#OneWayDiffusion"><NOBR>One Way Diffusion</NOBR></A>,
<A HREF = "#Onto">Onto</A>,
<A HREF = "#Opcode">Opcode</A>,
<A HREF = "#OperatingMode"><NOBR>Operating Mode</NOBR></A>,
<A HREF = "#Opponent">Opponent</A>,
<A HREF = "#OR">OR</A>,
<A HREF = "#Order">Order</A>,
<A HREF = "#Ordinal">Ordinal</A>,
<A HREF = "#Orthogonal">Orthogonal</A>,
<A HREF = "#OrthogonalLatinSquares"><NOBR>Orthogonal Latin Squares</NOBR></A>,
<A HREF = "#OTP">OTP</A>,
<A HREF = "#OverallDiffusion"><NOBR>Overall Diffusion</NOBR></A>
<DT><BIG>P</BIG>
<DD>
<A HREF = "#Padding">Padding</A>,
<A HREF = "#Password">Password</A>,
<A HREF = "#Patent">Patent</A>,
<A HREF = "#PatentInfringement"><NOBR>Patent Infringement</NOBR></A>,
<A HREF = "#PerfectSecrecy"><NOBR>Perfect Secrecy</NOBR></A>,
<A HREF = "#Permutation">Permutation</A>,
<A HREF = "#PGP">PGP</A>,
<A HREF = "#PhysicallyRandom"><NOBR>Physically Random</NOBR></A>,
<A HREF = "#PinkNoise"><NOBR>Pink Noise</NOBR></A>,
<A HREF = "#Plaintext">Plaintext</A>,
<A HREF = "#PoissonDistribution"><NOBR>Poisson Distribution</NOBR></A>,
<A HREF = "#PolyalphabeticCombiner"><NOBR>Polyalphabetic Combiner</NOBR></A>,
<A HREF = "#PolyalphabeticSubstitution"><NOBR>Polyalphabetic Substitution</NOBR></A>,
<A HREF = "#PolygramSubstitution"><NOBR>Polygram Substitution</NOBR></A>,
<A HREF = "#Polygraphic">Polygraphic</A>,
<A HREF = "#Polynomial">Polynomial</A>,
<A HREF = "#Polyphonic">Polyphonic</A>,
<A HREF = "#Population">Population</A>,
<A HREF = "#PopulationEstimation"><NOBR>Population Estimation</NOBR></A>,
<A HREF = "#Power">Power</A>,
<A HREF = "#Prime">Prime</A>,
<A HREF = "#Primitive">Primitive</A>,
<A HREF = "#PrimitivePolynomial"><NOBR>Primitive Polynomial</NOBR></A>,
<A HREF = "#PriorArt"><NOBR>Prior Art</NOBR></A>,
<A HREF = "#PRNG">PRNG</A>,
<A HREF = "#Process">Process</A>,
<A HREF = "#PseudoRandom">Pseudorandom</A>,
<A HREF = "#PublicKeyCipher"><NOBR>Public Key Cipher</NOBR></A>
<DT><BIG>R</BIG>
<DD>
<A HREF = "#Random">Random</A>,
<A HREF = "#RandomNumberGenerator"><NOBR>Random Number Generator</NOBR></A>,
<A HREF = "#RandomVariable"><NOBR>Random Variable</NOBR></A>,
<A HREF = "#Range">Range</A>,
<A HREF = "#ReallyRandom"><NOBR>Really Random</NOBR></A>,
<A HREF = "#Relay">Relay</A>,
<A HREF = "#ResearchHypothesis"><NOBR>Research Hypothesis</NOBR></A>,
<A HREF = "#Resistor">Resistor</A>,
<A HREF = "#Ring">Ring</A>,
<A HREF = "#RMS">RMS</A>,
<A HREF = "#RNG">RNG</A>,
<A HREF = "#Root">Root</A>,
<A HREF = "#RootMeanSquare">Root Mean Square</A>,
<A HREF = "#Round">Round</A>,
<A HREF = "#RSA">RSA</A>,
<A HREF = "#RunningKey"><NOBR>Running Key</NOBR></A>
<DT><BIG>S</BIG>
<DD>
<A HREF = "#Salt">Salt</A>,
<A HREF = "#Sample">Sample</A>,
<A HREF = "#S-Box">S-Box</A>,
<A HREF = "#Scalable">Scalable</A>,
<A HREF = "#Secrecy">Secrecy</A>,
<A HREF = "#SecretCode"><NOBR>Secret Code</NOBR></A>,
<A HREF = "#SecretKeyCipher"><NOBR>Secret Key Cipher</NOBR></A>,
<A HREF = "#Security">Security</A>,
<A HREF = "#SecurityThroughObscurity"><NOBR>Security Through Obscurity</NOBR></A>,
<A HREF = "#Semiconductor">Semiconductor</A>,
<A HREF = "#Semigroup">Semigroup</A>,
<A HREF = "#SessionKey"><NOBR>Session Key</NOBR></A>,
<A HREF = "#Set">Set</A>,
<A HREF = "#ShiftRegister"><NOBR>Shift Register</NOBR></A>,
<A HREF = "#Shuffle">Shuffle</A>,
<A HREF = "#SieveOfEratosthenes"><NOBR>Sieve of Eratosthenes</NOBR></A>,
<A HREF = "#Significance">Significance</A>,
<A HREF = "#SimpleSubstitution"><NOBR>Simple Substitution</NOBR></A>,
<A HREF = "#Software">Software</A>,
<A HREF = "#SourceCode"><NOBR>Source Code</NOBR></A>,
<A HREF = "#State">State</A>,
<A HREF = "#StationaryProcess"><NOBR>Stationary Process</NOBR></A>,
<A HREF = "#Statistic">Statistic</A>,
<A HREF = "#Statistics">Statistics</A>,
<A HREF = "#Steganography">Steganography</A>,
<A HREF = "#Stochastic">Stochastic</A>,
<A HREF = "#StreamCipher"><NOBR>Stream Cipher</NOBR></A>,
<A HREF = "#Strength">Strength</A>,
<A HREF = "#StrictAvalancheCriterion"><NOBR>Strict Avalanche Criterion (SAC)</NOBR></A>,
<A HREF = "#Subjective">Subjective</A>,
<A HREF = "#Substitution">Substitution</A>,
<A HREF = "#SubstitutionPermutation">Substitution-Permutation</A>,
<A HREF = "#SubstitutionTable"><NOBR>Substitution Table</NOBR></A>,
<A HREF = "#Superencryption">Superencryption</A>,
<A HREF = "#Surjective">Surjective</A>,
<A HREF = "#Switch">Switch</A>,
<A HREF = "#SwitchingFunction"><NOBR>Switching Function</NOBR></A>,
<A HREF = "#SymmetricCipher"><NOBR>Symmetric Cipher</NOBR></A>,
<A HREF = "#SymmetricGroup"><NOBR>Symmetric Group</NOBR></A>,
<A HREF = "#System">System</A>,
<A HREF = "#SystemDesign"><NOBR>System Design</NOBR></A>
<DT><BIG>T</BIG>
<DD>
<A HREF = "#TableSelectionCombiner"><NOBR>Table Selection Combiner</NOBR></A>,
<A HREF = "#TEMPEST">TEMPEST</A>,
<A HREF = "#Transformer">Transformer</A>,
<A HREF = "#Transistor">Transistor</A>,
<A HREF = "#Transposition">Transposition</A>,
<A HREF = "#TrapDoor"><NOBR>Trap Door</NOBR></A>,
<A HREF = "#TripleDES"><NOBR>Triple DES</NOBR></A>,
<A HREF = "#TrulyRandom"><NOBR>Truly Random</NOBR></A>,
<A HREF = "#Trust">Trust</A>,
<A HREF = "#TruthTable"><NOBR>Truth Table</NOBR></A>,
<A HREF = "#TypeIError"><NOBR>Type I Error</NOBR></A>,
<A HREF = "#TypeIIError"><NOBR>Type II Error</NOBR></A>
<DT><BIG>U</BIG>
<DD>
<A HREF = "#Unary">Unary</A>,
<A HREF = "#UnexpectedDistance"><NOBR>Unexpected Distance</NOBR></A>,
<A HREF = "#UnicityDistance"><NOBR>Unicity Distance</NOBR></A>,
<A HREF = "#UniformDistribution"><NOBR>Uniform Distribution</NOBR></A>
<DT><BIG>V</BIG>
<DD>
<A HREF = "#VariableSizeBlockCipher"><NOBR>Variable Size Block Cipher</NOBR></A>,
<A HREF = "#Voltage">Voltage</A>
<DT><BIG>W</BIG>
<DD>
<A HREF = "#WalshFunctions"><NOBR>Walsh Functions</NOBR></A>,
<A HREF = "#Weight">Weight</A>,
<A HREF = "#Whitening">Whitening</A>,
<A HREF = "#WhiteNoise"><NOBR>White Noise</NOBR></A>,
<A HREF = "#Wire">Wire</A>
<DT><BIG>X</BIG>
<DD>
<A HREF = "#XOR">XOR</A>
</DL>
<HR>
<DL>
<A NAME = "Absolute"></A>
<DT><B>Absolute</B>
<DD>In the study of
<A HREF = "#Logic">logic</A>, something observed similarly by
most observers, or something agreed upon, or which has the same
value each time measured. Something not in dispute, unarguable, and
independent of other
<A HREF = "#State">state</A>. As opposed to
<A HREF = "#Contextual">contextual</A>.
<A NAME = "AC"></A>
<P><DT><B>AC</B>
<DD>Alternating
<A HREF = "#Current">Current</A>:
Electrical power which repeatedly reverses direction of flow.
As opposed to
<A HREF = "#DC">DC</A>.
<P>Generally used for power distribution because the changing
current supports the use of
<A HREF = "#Transformer">transformers</A>. Utilities can thus
transport power at high
<A HREF = "#Voltage">voltage</A> and low
<A HREF = "#Current">current</A>, which minimizes
"ohmic"
or I<SUP>2</SUP>R losses. The high voltages are then reduced
at power substations and again by pole transformers for delivery
to the consumer.
<A NAME = "AdditiveCombiner"></A>
<P><DT><B>Additive Combiner</B>
<DD>An additive
<A HREF = "#Combiner">combiner</A> uses numerical concepts similar
to addition to
<A HREF = "#Mixing">mix</A> multiple values into a single result.
<P>One example is
<A HREF = "#Byte">byte</A> addition
<A HREF = "#Modulo">modulo</A> 256, which simply adds
two byte values, each in the range 0..255, and produces the
remainder after division by 256, again a value in the byte range
of 0..255. Subtraction is also an "additive" combiner.
<P>Another example is bit-level
<A HREF = "#ExclusiveOR">exclusive-OR</A> which is addition
<A HREF = "#Mod2">mod 2</A>.
A byte-level exclusive-OR is a
<A HREF = "#Polynomial">polynomial</A> addition.
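<P>Both combiners above are easily sketched in code (a minimal illustration in Python; the function names are ours, not from any particular cipher):

```python
def add_combine(a, b):
    """Combine two byte values (0..255) by addition modulo 256."""
    return (a + b) % 256

def add_extract(c, b):
    """Invert add_combine: recover a from the combined result and b."""
    return (c - b) % 256

def xor_combine(a, b):
    """Bit-level exclusive-OR: addition mod 2 in each bit position."""
    return a ^ b

# Addition mod 256 mixes two values reversibly:
c = add_combine(200, 100)          # (200 + 100) mod 256 = 44
assert add_extract(c, 100) == 200

# XOR is its own inverse, since each bit is added mod 2:
assert xor_combine(xor_combine(0x5A, 0x3C), 0x3C) == 0x5A
```

Because each combining step is reversible given one of its inputs, the same machinery serves both enciphering and deciphering.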
<A NAME = "AdditiveRNG"></A>
<P><DT><B>Additive RNG</B>
<DD>(Additive
<A HREF = "#RandomNumberGenerator">random number generator</A>.)
An <A HREF = "#LFSR">LFSR</A>-based
<A HREF = "#RNG">RNG</A> typically using multi-bit elements
and integer addition (instead of
<A HREF = "#XOR">XOR</A>) combining. References include:
<BLOCKQUOTE>
Knuth, D. 1981.
<I>The Art of Computer Programming,</I> Vol. 2,
<I>Seminumerical Algorithms.</I> 2nd ed. 26-31.
Addison-Wesley: Reading, Massachusetts.
</BLOCKQUOTE>
<BLOCKQUOTE>
Marsaglia, G. and L. Tsay. 1985. Matrices and the Structure
of Random Number Sequences.
<I>Linear Algebra and its Applications.</I> 67: 147-156.
</BLOCKQUOTE>
<P>Advantages include:
<UL>
<LI>A long, mathematically proven cycle length.
<LI>Especially efficient
<A HREF = "#Software">software</A> implementations.
<LI>Almost arbitrary initialization (some element must have its
least significant bit set).
<LI>A simple design which is easy to get right.
</UL>
<P>In addition, a vast multiplicity of independent cycles has the
potential of confusing even a "quantum computer," should such a
thing become possible.
<BIG><PRE>
   For Degree-n Primitive, and Bit Width w

   Total States:        2<SUP>nw</SUP>
   Non-Init States:     2<SUP>n(w-1)</SUP>
   Number of Cycles:    2<SUP>(n-1)(w-1)</SUP>
   Length Each Cycle:   (2<SUP>n</SUP>-1)2<SUP>(w-1)</SUP>
   Period of LSB:       2<SUP>n</SUP>-1
</PRE></BIG>
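<P>These counts can be checked mechanically (a small Python sketch using exact integer arithmetic; the function name is ours):

```python
def additive_rng_counts(n, w):
    """State counts for a degree-n primitive Additive RNG of width w."""
    total_states = 2**(n * w)
    non_init     = 2**(n * (w - 1))          # all lsb's zero: disallowed
    cycles       = 2**((n - 1) * (w - 1))
    cycle_len    = (2**n - 1) * 2**(w - 1)
    return total_states, non_init, cycles, cycle_len

# The degree-127, width-32 example:
total, non_init, cycles, length = additive_rng_counts(127, 32)
assert total    == 2**4064
assert non_init == 2**3937
assert cycles   == 2**3906
assert length   == (2**127 - 1) * 2**31    # just under 2**158
```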
<P>The binary addition of two bits with no carry input is just
XOR, so the
<A HREF = "#LSB">lsb</A> of an Additive RNG has
the usual <A HREF = "#MaximalLength">maximal length</A> period.
<P>A degree-127 Additive RNG using 127 elements of 32 bits each
has 2<SUP>4064</SUP> unique states. Of these, 2<SUP>3937</SUP>
are disallowed by initialization (the
<A HREF = "#LSB">lsb</A>'s are all "0") but this is just one
unusable state out of 2<SUP>127</SUP>. There are still
2<SUP>3906</SUP> cycles which <I>each</I> have almost 2<SUP>158</SUP>
steps. (The Cloak2
<A HREF = "#StreamCipher">stream cipher</A> uses an Additive RNG
with 9689 elements of 32 bits, and so has 2<SUP>310048</SUP> unique
states. These are mainly distributed among 2<SUP>300328</SUP>
different cycles with almost 2<SUP>9720</SUP> steps each.)
<P>Note that any LFSR, including the Additive RNG, is very weak
when used alone. But when steps are taken to hide the sequence
(such as using a
<A HREF = "#Jitterizer">jitterizer</A> and
<A HREF = "#DynamicSubstitutionCombiner">Dynamic Substitution
combining</A>) the result can have significant strength.
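<P>As a concrete sketch, here is a minimal lagged-Fibonacci additive generator in Python, using the well-known lags (55, 24) and 32-bit elements. This illustrates the general construction only; it is not the Cloak2 design, and such a generator is very weak used alone:

```python
class AdditiveRNG:
    """Lagged-Fibonacci generator: x[n] = (x[n-55] + x[n-24]) mod 2**32."""

    def __init__(self, seed):
        assert len(seed) == 55
        # At least one element must be odd (lsb set); otherwise every
        # lsb stays zero and the maximal-length lsb sequence is lost.
        assert any(x & 1 for x in seed)
        self.state = list(seed)
        self.i = 0

    def next(self):
        s = self.state
        # s[i] holds x[n-55]; (i - 24) mod 55 indexes x[n-24].
        s[self.i] = (s[self.i] + s[(self.i - 24) % 55]) & 0xFFFFFFFF
        out = s[self.i]
        self.i = (self.i + 1) % 55
        return out

rng = AdditiveRNG(list(range(1, 56)))     # any seed with an odd element
stream = [rng.next() for _ in range(5)]
```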
<A NAME = "Affine"></A>
<P><DT><B>Affine</B>
<DD>Generally speaking, <A HREF = "#Linear">linear</A>.
Sometimes <I>affine</I> generalizes "linearity" to expressions of
multiple independent variables, with only a single-variable
expression being called "linear."
From analytic and algebraic geometry.
<BLOCKQUOTE>
<P>Assume the flat plane defined by two arbitrary unit vectors
<B>e</B><SUB>1</SUB>, <B>e</B><SUB>2</SUB> and a common origin
<B>O</B>; this is a coordinate "frame." Assume a grid of lines
parallel to each frame vector, separated by unit lengths (a "metric"
which may differ for each vector). If the vectors happen to be
perpendicular, we have a Cartesian coordinate system, but in any
case we can locate any point on the plane by its position on the
grid.
<P>An affine transformation can change the origin, the angle between
the vectors, and unit vector lengths. Shapes in the original frame
thus become "pinched," "squashed" or "stretched" images under the
affine transformation. This same sort of thing generalizes to higher
degree expressions.
</BLOCKQUOTE>
<P>The <I>Handbook of Mathematics</I> says that if <B>e</B><SUB>1</SUB>,
<B>e</B><SUB>2</SUB>, <B>e</B><SUB>3</SUB> are linearly independent
vectors, any vector <B>a</B> can be expressed uniquely in the
form <B>a</B> = <B>a</B><SUB>1</SUB><B>e</B><SUB>1</SUB> +
<B>a</B><SUB>2</SUB><B>e</B><SUB>2</SUB> +
<B>a</B><SUB>3</SUB><B>e</B><SUB>3</SUB>
where the <B>a</B><SUB>i</SUB> are the <I>affine coordinates.</I>
(p.518)
<P><I>The VNR Concise Encyclopedia of Mathematics</I> says
"All transformations that lead to a uniquely soluble system of linear
equations are called <I>affine transformations</I>." (p.534)
<A NAME = "AffineBooleanFunction"></A>
<P><DT><B>Affine Boolean Function</B>
<DD>A
<A HREF = "#BooleanFunction">Boolean function</A>
which can be represented in the form:
<BIG><BLOCKQUOTE><TT>
a<SUB>n</SUB>x<SUB>n</SUB> + a<SUB>n-1</SUB>x<SUB>n-1</SUB>
+ ... + a<SUB>1</SUB>x<SUB>1</SUB> + a<SUB>0</SUB>
</TT></BLOCKQUOTE></BIG>
where the operations are
<A HREF = "#Mod2">mod 2</A>: addition is
<A HREF = "#ExclusiveOR">Exclusive-OR</A>, and multiplication is
<A HREF = "#AND">AND</A>.
<P>Note that all of the variables <BIG>x<SUB>i</SUB></BIG> are to
the first power only, and each coefficient <BIG>a<SUB>i</SUB></BIG>
simply enables or disables its associated variable.
The result is a single Boolean value, but the constant term
<BIG>a<SUB>0</SUB></BIG> can produce either possible output
polarity.
<P>Here are all possible 3-variable affine Boolean functions (each
of which may be inverted by complementing the constant term):
<PRE>
              affine truth table

          c    0 0 0 0 0 0 0 0
         x0    0 1 0 1 0 1 0 1
         x1    0 0 1 1 0 0 1 1
      x1+x0    0 1 1 0 0 1 1 0
         x2    0 0 0 0 1 1 1 1
      x2+x0    0 1 0 1 1 0 1 0
      x2+x1    0 0 1 1 1 1 0 0
   x2+x1+x0    0 1 1 0 1 0 0 1
</PRE>
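<P>The table can be reproduced by direct enumeration (a small Python sketch; the coefficient bits select which variables participate):

```python
def affine(c, a, x):
    """c + a[n-1]x[n-1] + ... + a[0]x[0], all operations mod 2."""
    v = c
    for ai, xi in zip(a, x):
        v ^= ai & xi        # mod-2 multiply is AND, mod-2 add is XOR
    return v

# The x2+x0 row: coefficients (a0, a1, a2) = (1, 0, 1), constant c = 0,
# evaluated over inputs numbered 0..7 with x0 the low bit:
row = [affine(0, (1, 0, 1), ((n >> 0) & 1, (n >> 1) & 1, (n >> 2) & 1))
       for n in range(8)]
assert row == [0, 1, 0, 1, 1, 0, 1, 0]

# Complementing the constant term inverts the output:
assert affine(1, (1, 0, 1), (0, 0, 0)) == 1
```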
<A NAME = "Alphabet"></A>
<P><DT><B>Alphabet</B>
<DD>The set of symbols under discussion.
<A NAME = "AlternativeHypothesis"></A>
<P><DT><B>Alternative Hypothesis</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, the statement formulated so
that the logically contrary statement, the
<A HREF = "#NullHypothesis">null hypothesis</A> <I>H</I><SUB>0</SUB>,
has a test
<A HREF = "#Statistic">statistic</A> with a known
<A HREF = "#Distribution">distribution</A> for the case when there
is nothing unusual to detect. Also called the
<A HREF = "#ResearchHypothesis">research hypothesis</A>
<I>H</I><SUB>1</SUB>, and logically identical to
"NOT-<I>H</I><SUB>0</SUB>" or "<I>H</I><SUB>0</SUB>
is not true."
<A NAME = "Amplifier"></A>
<P><DT><B>Amplifier</B>
<DD>A
<A HREF = "#Component">component</A> or device intended to sense a
signal and produce a larger version of that signal. In general,
any amplifying device is limited by available power,
<A HREF = "#Frequency">frequency</A>
response, and device maximums for
<A HREF = "#Voltage">voltage</A>,
<A HREF = "#Current">current</A>, and
power dissipation.
<P><A HREF = "#Transistor">Transistors</A> are
<A HREF = "#Analog">analog</A> amplifiers which are basically
<A HREF = "#Linear">linear</A> over a reasonable range and so
require
<A HREF = "#DC">DC</A> power. In contrast,
<A HREF = "#Relay">relays</A> are classically mechanical devices
with direct metal-to-metal moving connections, and so can handle
generally higher power and
<A HREF = "#AC">AC</A> current.
<A NAME = "Amplitude"></A>
<P><DT><B>Amplitude</B>
<DD>The signal level, or height.
<A NAME = "Analog"></A>
<P><DT><B>Analog</B>
<DD>Pertaining to continuous values. As opposed to
<A HREF = "#Digital">digital</A>
or discrete quantities.
<A NAME = "AND"></A>
<P><DT><B>AND</B>
<DD>A Boolean
<A HREF = "#LogicFunction">logic function</A> which is also
<A HREF = "#Mod2">mod 2</A> multiplication.
<A NAME = "ASCII"></A>
<P><DT><B>ASCII</B>
<DD>A public
<A HREF = "#Code">code</A> for converting between
7-<A HREF = "#Bit">bit</A> values 0..127 (or 00..7f
<A HREF = "#Hexadecimal">hex</A>) and text characters.
ASCII is an acronym for American Standard Code for Information
Interchange.
<PRE>
 DEC HEX CTRL CMD    DEC HEX CHAR    DEC HEX CHAR    DEC HEX CHAR

   0  00  ^@  NUL     32  20  SPC     64  40   @      96  60   `
   1  01  ^A  SOH     33  21   !      65  41   A      97  61   a
   2  02  ^B  STX     34  22   "      66  42   B      98  62   b
   3  03  ^C  ETX     35  23   #      67  43   C      99  63   c
   4  04  ^D  EOT     36  24   $      68  44   D     100  64   d
   5  05  ^E  ENQ     37  25   %      69  45   E     101  65   e
   6  06  ^F  ACK     38  26   &      70  46   F     102  66   f
   7  07  ^G  BEL     39  27   '      71  47   G     103  67   g
   8  08  ^H  BS      40  28   (      72  48   H     104  68   h
   9  09  ^I  HT      41  29   )      73  49   I     105  69   i
  10  0a  ^J  LF      42  2a   *      74  4a   J     106  6a   j
  11  0b  ^K  VT      43  2b   +      75  4b   K     107  6b   k
  12  0c  ^L  FF      44  2c   ,      76  4c   L     108  6c   l
  13  0d  ^M  CR      45  2d   -      77  4d   M     109  6d   m
  14  0e  ^N  SO      46  2e   .      78  4e   N     110  6e   n
  15  0f  ^O  SI      47  2f   /      79  4f   O     111  6f   o
  16  10  ^P  DLE     48  30   0      80  50   P     112  70   p
  17  11  ^Q  DC1     49  31   1      81  51   Q     113  71   q
  18  12  ^R  DC2     50  32   2      82  52   R     114  72   r
  19  13  ^S  DC3     51  33   3      83  53   S     115  73   s
  20  14  ^T  DC4     52  34   4      84  54   T     116  74   t
  21  15  ^U  NAK     53  35   5      85  55   U     117  75   u
  22  16  ^V  SYN     54  36   6      86  56   V     118  76   v
  23  17  ^W  ETB     55  37   7      87  57   W     119  77   w
  24  18  ^X  CAN     56  38   8      88  58   X     120  78   x
  25  19  ^Y  EM      57  39   9      89  59   Y     121  79   y
  26  1a  ^Z  SUB     58  3a   :      90  5a   Z     122  7a   z
  27  1b  ^[  ESC     59  3b   ;      91  5b   [     123  7b   {
  28  1c  ^\  FS      60  3c   <      92  5c   \     124  7c   |
  29  1d  ^]  GS      61  3d   =      93  5d   ]     125  7d   }
  30  1e  ^^  RS      62  3e   >      94  5e   ^     126  7e   ~
  31  1f  ^_  US      63  3f   ?      95  5f   _     127  7f  DEL
</PRE>
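<P>In code, the table is just the character-to-ordinal mapping; in Python, for example, ord and chr convert between characters and their 7-bit values:

```python
# 'A' is decimal 65, hex 41, as the table shows:
assert ord('A') == 65 == 0x41
assert chr(0x61) == 'a'

# Control characters occupy 0..31; ^M (carriage return) is 13, hex 0d:
assert ord('\r') == 13

# Printable characters run from SPC (32) through '~' (126); 127 is DEL:
assert all(32 <= ord(ch) < 127 for ch in "Hello, world!")
```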
<A NAME = "Associative"></A>
<P><DT><B>Associative</B>
<DD>A
<A HREF = "#Dyadic">dyadic</A> operation in which two sequential
operations on three arguments can first operate on either the
first two or the last two arguments, producing the same result in
either case: <NOBR>(a + b) + c = a + (b + c).</NOBR>
<P>Also see:
<A HREF = "#Commutative">commutative</A> and
<A HREF = "#Distributive">distributive</A>.
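<P>A quick numeric check (a sketch using byte addition modulo 256, which is associative, and byte subtraction, which is not):

```python
# Byte addition mod 256 is associative: grouping does not matter.
a, b, c = 200, 100, 50
left  = (((a + b) % 256) + c) % 256     # (a + b) + c
right = (a + ((b + c) % 256)) % 256     # a + (b + c)
assert left == right == 94              # 350 mod 256

# Byte subtraction, though also "additive," is not associative:
assert (((a - b) % 256) - c) % 256 != (a - ((b - c) % 256)) % 256
```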
<A NAME = "AsymmetricCipher"></A>
<P><DT><B>Asymmetric Cipher</B>
<DD>A
<A HREF = "#PublicKeyCipher">public key cipher</A>.
<A NAME = "Attack"></A>
<P><DT><B>Attack</B>
<DD>General ways in which a
<A HREF = "#Cryptanalyst">cryptanalyst</A> may try to
"<A HREF = "#Break">break</A>" or penetrate the secrecy of a
<A HREF = "#Cipher">cipher</A>. These are <B>not</B> algorithms;
they are just <I>approaches</I> as a starting place for constructing
specific algorithms.
<P>Classically, attacks were neither named nor classified; there
was just: "here is a cipher, and here is the attack." And while
this gradually developed into named attacks, there is no overall
attack taxonomy. Currently, attacks are often classified by the
information available to the attacker or <I>constraints</I> on the
attack, and then by strategies which use the available information.
Not only
<A HREF = "#Cipher">ciphers</A>, but also cryptographic
<A HREF = "#Hash">hash</A> functions can be attacked, generally
with very different strategies.
<H4>Informational Constraints</H4>
<P>We are to attack a cipher which
<A HREF = "#Encipher">enciphers</A>
<A HREF = "#Plaintext">plaintext</A> into
<A HREF = "#Ciphertext">ciphertext</A> or
<A HREF = "#Decipher">deciphers</A> the opposite way, under
control of a
<A HREF = "#Key">key</A>. The available information necessarily
constrains our attack strategies.
<UL>
<LI><B>Ciphertext Only:</B> We have only ciphertext to work with.
Sometimes the statistics of the ciphertext provide insight and
can lead to a break.
<LI><B>Known Plaintext:</B> We have some, or even an extremely
large amount, of plaintext and the associated ciphertext.
<LI><B>Defined Plaintext:</B> We can submit arbitrary messages to
be ciphered and capture the resulting ciphertext.
(Also Chosen Plaintext and Adaptive Chosen Plaintext.)
<LI><B>Defined Ciphertext:</B> We can submit arbitrary messages
to be deciphered and see the resulting plaintext.
(Also Chosen Ciphertext and Adaptive Chosen Ciphertext.)
<LI><B>Chosen Key:</B> We can specify a change in any particular
key bit, or some other relationship between keys.
<LI><B>Timing:</B> We can measure the duration of ciphering
operations and use that to reveal the key or data.
<LI><B>Fault Analysis:</B> We can induce random faults into the
ciphering machinery, and use those to expose the key.
<LI><B>Man-in-the-Middle:</B> We can subvert the routing capabilities
of a computer network, and pose as the other side to each of the
communicators. (Usually a key authentication attack on
<A HREF = "#PublicKeyCipher">public key</A> systems.)
</UL>
<H4>Attack Strategies</H4>
<P>The goal of an attack is to reveal some unknown plaintext, or the
key (which will reveal the plaintext). An attack which succeeds with
less effort than a brute-force search we call a
<A HREF = "#Break">break</A>.
An "academic" ("theoretical," "certificational") break may involve
impractically large amounts of data or resources, yet still be called
a "break" if the attack would be easier than brute force.
(It is thus possible for a "broken" cipher to be much stronger than
a cipher with a short key.) Sometimes the attack strategy is thought
to be obvious, given a particular informational constraint,
and is not further classified.
<UL>
<LI><A HREF = "#BruteForceAttack"><B>Brute Force</B></A>
(also Exhaustive Key Search): Try to decipher ciphertext under
every possible key until readable messages are produced.
(Also "brute force" any searchable-size <I>part</I> of a
cipher.)
<LI><A HREF = "#CodebookAttack"><B>Codebook</B></A> (the classic
"codebreaking" approach): Collect a
<A HREF = "#Codebook">codebook</A> of transformations
between plaintext and ciphertext.
<LI><A HREF = "#DifferentialCryptanalysis"><B>Differential
Cryptanalysis:</B></A> Find a statistical correlation between
key values and cipher transformations (typically the
Exclusive-OR of text pairs), then use sufficient defined
plaintext to develop the key.
<LI><B>Linear Cryptanalysis:</B> Find a linear approximation
to the keyed S-boxes in a cipher, and use that to reveal
the key.
<LI><B>Meet-in-the-Middle:</B> Given a two-level multiple encryption,
search for the keys by collecting every possible result for
enciphering a known plaintext under the first cipher, and
deciphering the known ciphertext under the second cipher; then
find the match.
<LI><B>Key Schedule:</B> Choose keys which produce known effects
in different rounds.
<LI><A HREF = "#BirthdayAttack"><B>Birthday</B></A> (usually a hash
attack): Use the
<A HREF = "#BirthdayParadox">birthday paradox</A>, the idea that
it is much easier to find two values which match than it is to
find a match to some particular value.
<LI><B>Formal Coding</B> (also Algebraic): From the cipher design,
develop equations for the key in terms of known plaintext,
then solve those equations.
<LI><B>Correlation</B>: In a
<A HREF = "#StreamCipher">stream cipher</A>, distinguish between
data and confusion, or between different confusion streams, from
a statistical imbalance in a
<A HREF = "#Combiner">combiner</A>.
<LI><A HREF = "#DictionaryAttack"><B>Dictionary</B></A>: Form a list
of the most-likely keys, then try those keys one-by-one (a way to
improve brute force).
<LI><B>Replay</B>: Record and save some ciphertext blocks or messages
(especially if the content is known), then re-send those blocks
when useful.
</UL>
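<P>To make the exhaustive search idea concrete, here is a minimal Python
sketch (an illustration only: the single-byte XOR "cipher" and the
printable-text recognizer are hypothetical stand-ins, not any cipher
discussed here) which tries every key and keeps those producing
readable messages:

```python
def toy_encipher(data, key):
    """A toy cipher: XOR every byte with a single-byte key."""
    return bytes(b ^ key for b in data)

def brute_force(ciphertext, recognizer):
    """Try every possible key; keep those that yield readable output."""
    return [k for k in range(256)
            if recognizer(toy_encipher(ciphertext, k))]

def looks_readable(candidate):
    """A crude recognizer: every byte is printable ASCII."""
    return all(32 <= b < 127 for b in candidate)

ct = toy_encipher(b"ATTACK AT DAWN", 0x5A)
hits = brute_force(ct, looks_readable)
assert 0x5A in hits    # the true key survives the recognizer
```

With a realistic keyspace the same loop is infeasible; the point of the
sketch is only the structure of the attack.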
<P>Many attacks try to isolate unknown small components or aspects
so they can be solved separately, a process known as
<A HREF = "#DivideAndConquer">divide and conquer</A>. Also see:
<A HREF = "#Security">security</A>.
<A NAME = "AugmentedRepetitions"></A>
<P><DT><B>Augmented Repetitions</B>
<DD>When sampling with replacement, eventually we again find some
object or value which has been found before. We call such an
occurrence a "repetition." A value found exactly twice is a
double, or "2-rep"; a value found three times is a triple or
"3-rep," and so on.
<P>For a known
<A HREF = "#Population">population</A>, the number of repetitions
expected at each level has long been understood to be a
<A HREF = "#BinomialDistribution">binomial</A> expression.
But if we are sampling in an attempt to <I>establish</I> the
effective size of an unknown population, we have two problems:
<OL><P>
<LI>The binomial equations which predict expected repetitions
do not reverse well to predict population, and
<LI>Exact repetitions discard information and so are less
accurate than we would like. For example, if we have a
double and then find another of that value, we now have
a triple, and one <I>less</I> double. So if we are using
doubles to predict population, the occurrence of a triple
influences the predicted population in exactly the wrong
direction.
</OL>
<P>Fortunately, there is an unexpected and apparently previously
unknown combinatoric relationship between the population and the
number of combinations of occurrences of repeated values. This
allows us to convert any number of triples and higher <I>n</I>-reps
to the number of 2-reps which have the same probability. So if we
have a double, and then get another of the same value, we have a
triple, which we can convert into three 2-reps. The total number
of 2-reps from all repetitions (the <I>augmented 2-reps</I> value)
is then used to predict population.
<P>We can relate the number of samples <I>s</I> to the population
<I>N</I> through the expected number of augmented doubles
<I>Ead</I>:
<PRE>
Ead(N,s) = s(s-1) / 2N .
</PRE>
This equation is <B>exact</B>, <I>provided</I> we interpret all
the exact n-reps in terms of 2-reps. For example, a triple is
interpreted as three doubles; the augmentation from 3-reps to 2-reps
is (3 C 2) or 3. The augmented result is the sum of the
contributions from all higher repetition levels:
<PRE>
          n    i
 ad  =   SUM  ( )  r[i] .
         i=2   2
</PRE>
where <I>ad</I> is the number of augmented doubles, and <I>r[i]</I>
is the exact repetition count at the <I>i</I>-th level.
<P>And this leads to an equation for predicting population:
<PRE>
Nad(s,ad) = s(s-1) / 2 ad .
</PRE>
This predicts the population <I>Nad</I> as based on a mean value
of augmented doubles <I>ad</I>. Clearly, we expect the number of
samples to be far larger than the number of augmented doubles, but
an error in the augmented doubles <I>ad</I> should produce a
proportionally similar error in the predicted population <I>Nad.</I>
We typically develop <I>ad</I> to high precision by averaging the
results of many large trials.
<P>However, since the trials should have approximately a simple
<A HREF = "#PoissonDistribution">Poisson distribution</A> (which has
only a single parameter), we could be a bit more clever and fit the
results to the expected distribution, thus perhaps developing a bit
more accuracy.
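<P>The prediction equations above can be exercised numerically. The
following Python sketch (my illustration; the names, seed, and trial
sizes are arbitrary assumptions) samples with replacement from a known
population, averages the augmented-doubles count over many trials, and
recovers the population size:

```python
import random
from collections import Counter
from math import comb

def augmented_doubles(samples):
    """ad = SUM over i >= 2 of (i C 2) * r[i]: a value found
    i times contributes (i C 2) two-reps."""
    return sum(comb(c, 2) for c in Counter(samples).values() if c >= 2)

def predict_population(s, ad):
    """Nad(s,ad) = s(s-1) / (2 ad), inverted from Ead(N,s) = s(s-1) / 2N."""
    return s * (s - 1) / (2 * ad)

rng = random.Random(1)            # fixed seed for repeatable trials
N, s, trials = 10_000, 1_000, 200
mean_ad = sum(
    augmented_doubles([rng.randrange(N) for _ in range(s)])
    for _ in range(trials)) / trials
estimate = predict_population(s, mean_ad)   # close to N = 10000
```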
<P>Also see the article:
<A HREF = "http://www.io.com/~ritter/ARTS/BIRTHDAY.HTM">Estimating
Population from Repetitions in Accumulated Random Samples</A>, and the
<A HREF = "http://www.io.com/~ritter/JAVASCRP/POPWKSHT.HTM">Population
Estimation Worksheets in JavaScript</A> page of the
<A HREF = "http://www.io.com/~ritter/#JavaScript">Ciphers By Ritter /
JavaScript</A> computation pages.
<A NAME = "Authentication"></A>
<P><DT><B>Authentication</B>
<DD>One of the objectives of
<A HREF = "#Cryptography">cryptography</A>: Assurance that a
message has not been modified in transit or storage (<I>message</I>
authentication or message <I>integrity</I>). Also
<A HREF = "#Key">key</A> authentication for
<A HREF = "#PublicKeyCipher">public keys</A>. Also user or source
identification, which may verify the right to send the message in
the first place.
<A NAME = "MessageAuthentication"></A>
<H4>Message <A HREF = "#Authentication">Authentication</A></H4>
<P>One form of message authentication computes a
<A HREF = "#CRC">CRC</A>
<A HREF = "#Hash">hash</A> across the
<A HREF = "#Plaintext">plaintext</A> data, and appends the CRC
remainder (or <I>result</I>) to the plaintext data: this adds
a computed redundancy to an arbitrary message.
The CRC result is then
<A HREF = "#Encipher">enciphered</A> along with the data. When
the message is
<A HREF = "#Decipher">deciphered</A>, if a second CRC operation
produces the same result, the message can be assumed unchanged.
<P>Note that a CRC is a fast,
<A HREF = "#Linear">linear</A> hash. Messages with particular CRC
result values can be constructed rather easily. However, if the CRC
is hidden behind strong ciphering, an
<A HREF = "#Opponent">Opponent</A> is unlikely to be able
to change the CRC value systematically or effectively. In particular,
this means that the CRC value will need more protection than that
provided by a simple
<A HREF = "#ExclusiveOR">exclusive-OR</A>
<A HREF = "#StreamCipher">stream cipher</A>, or by the exclusive-OR
approach to handling short last
<A HREF = "#Block">blocks</A> in a
<A HREF = "#BlockCipher">block cipher</A>.
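<P>The CRC form of message authentication can be sketched in Python
(an illustrative sketch using the standard-library zlib CRC-32; in
practice the plaintext-plus-CRC unit would then be enciphered, as
described above, since the CRC is linear and unkeyed):

```python
import zlib

def add_crc(plaintext):
    """Append the 32-bit CRC remainder to the plaintext; the whole
    result would then be enciphered as a unit."""
    return plaintext + zlib.crc32(plaintext).to_bytes(4, "big")

def check_crc(deciphered):
    """After deciphering, recompute the CRC over the body; a match
    suggests the message is unchanged."""
    body, tag = deciphered[:-4], deciphered[-4:]
    return zlib.crc32(body).to_bytes(4, "big") == tag

msg = add_crc(b"meet at the bridge")
assert check_crc(msg)
tampered = bytes([msg[0] ^ 1]) + msg[1:]   # flip one plaintext bit
assert not check_crc(tampered)
```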
<P>A similar approach to message authentication uses a nonlinear
cryptographic hash function. These also add a computed redundancy
to the message, but generally require significantly more computation
than a CRC. It is thought to be exceedingly difficult to
construct messages with a particular cryptographic hash result,
so the hash result perhaps need not be hidden by encryption.
<P>One form of cryptographic hash is
<A HREF = "#DES">DES</A>
<A HREF = "#CBC">CBC</A> mode: using a key different than that used
for encryption, the final block of ciphertext is the hash of the
message. This obviously doubles the computation when both encryption
and authentication are needed. And since any cryptographic hash is
vulnerable to
<A HREF = "#BirthdayAttack">birthday attacks</A>, the small 64-bit
block size implies that we should be able to find two different
messages with the same hash value by constructing and hashing "only"
about 2<SUP>32</SUP> different messages.
<P>Another approach to message authentication is to use an
<A HREF = "#AuthenticatingBlockCipher">authenticating block cipher</A>;
this is often a
<A HREF = "#BlockCipher">block cipher</A> which has a large
<A HREF = "#Block">block</A>, with some "extra data" inserted in
an "authentication field" as part of the plaintext before
enciphering each block.
The "extra data" can be some transformation of the key, the
plaintext, and/or a sequence number. This essentially creates a
<A HREF = "#Homophonic">homophonic</A> block cipher: If we know
the key, many different ciphertexts will produce the same plaintext
field, but only one of those will have the correct authentication
field.
<P>The usual approach to authentication in a
<A HREF = "#PublicKeyCipher">public key cipher</A> is to encipher
with the private key. The resulting ciphertext can then be
deciphered by the public key, which anyone can know. Since even
the wrong key will produce a "deciphered" result, it is also
necessary to identify the resulting plaintext as a valid message;
in general this will also require redundancy in the form of a hash
value in the plaintext. The process provides no
<A HREF = "#Secrecy">secrecy</A>, but only a person with access to
the private key could have enciphered the message.
<A NAME = "UserAuthentication"></A>
<H4>User <A HREF = "#Authentication">Authentication</A></H4>
<P>The classical approach to user authentication is a
<A HREF = "#Password">password</A>;
this is "something you know." One can also make use of "something
you have" (such as a secure ID card), or "something you are"
(biometrics).
<P>The classic problem with passwords is that they must be
remembered by ordinary people, and so carry a limited amount of
uniqueness. Easy-to-remember passwords are often common language
phrases, and so often fall to a
<A HREF = "#DictionaryAttack">dictionary attack</A>. More
modern approaches involve using a Diffie-Hellman key exchange,
<I>plus</I> the password, thus minimizing exposure to a dictionary
attack. This does require a program on the user end, however.
<A NAME = "KeyAuthentication"></A>
<H4>Key <A HREF = "#Authentication">Authentication</A></H4>
<P>In
<A HREF = "#SecretKeyCipher">secret key ciphers</A>,
<A HREF = "#Key">key</A> authentication is <I>inherent</I> in
<A HREF = "#Security">secure</A>
<A HREF = "#KeyDistributionProblem">key distribution</A>.
<P>In
<A HREF = "#PublicKeyCipher">public key ciphers</A>, public keys
are exposed and often delivered insecurely. But someone who uses
the wrong key may unknowingly have "secure" communications with an
<A HREF = "#Opponent">Opponent</A>, as in a
<A HREF = "#ManInTheMiddleAttack">man-in-the-middle attack</A>.
It is thus absolutely crucial that public keys be authenticated
or <I>certified</I> as a separate process. Normally this implies
the need for a Certification Authority or CA.
<A NAME = "AuthenticatingBlockCipher"></A>
<P><DT><B>Authenticating Block Cipher</B>
<DD>A
<A HREF = "#BlockCipher">block cipher</A>
<A HREF = "#Mechanism">mechanism</A> which inherently contains an
<A HREF = "#Authentication">authentication</A> value or field.
<A NAME = "Autokey"></A>
<P><DT><B>Autokey</B>
<DD>A cipher whose key is produced by message data. One common
form is "ciphertext feedback," where
<A HREF = "#Ciphertext">ciphertext</A> is "fed back" into the
<A HREF = "#State">state</A> of the
<A HREF = "#RandomNumberGenerator">random number generator</A>
used to produce the
<A HREF = "#ConfusionSequence">confusion sequence</A> for a
<A HREF = "#StreamCipher">stream cipher</A>.
<A NAME = "Avalanche"></A>
<P><DT><B>Avalanche</B>
<DD>The observed property of a
<A HREF = "#BlockCipher">block cipher</A> constructed in
<A HREF = "#Layer">layers</A> or
"<A HREF = "#Round">rounds</A>" with respect to a tiny change
in the input. The
change of a single input bit generally produces multiple
bit-changes after one round, many more bit-changes after another
round, until, eventually, about half of the block will change.
An analogy is drawn to an avalanche in snow, where a small
initial effect can lead to a dramatic result.
As originally described by Feistel:
<BLOCKQUOTE>
"As the input moves through successive layers the pattern of
1's generated is amplified and results in an unpredictable
avalanche. In the end the final output will have, on average,
half 0's and half 1's . . . ." [p.22]
</BLOCKQUOTE>
<P>Feistel, H. 1973. Cryptography and Computer Privacy.
<I>Scientific American.</I> 228(5): 15-23.
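<P>Avalanche is easy to observe experimentally. As an illustrative
sketch (using SHA-256 as a convenient stand-in for a layered
construction; the entry itself concerns block ciphers), flipping a
single input bit changes about half of the output bits:

```python
import hashlib

def bit_diff(a, b):
    """Count differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

# Flip one low bit of the input ('k' -> 'j') and compare outputs.
h1 = hashlib.sha256(b"block input k").digest()
h2 = hashlib.sha256(b"block input j").digest()
changed = bit_diff(h1, h2)
assert 88 <= changed <= 168    # about half of 256 bits, on average
```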
<P>Also see
<A HREF = "#Mixing">mixing</A>,
<A HREF = "#Diffusion">diffusion</A>,
<A HREF = "#OverallDiffusion">overall diffusion</A>,
<A HREF = "#StrictAvalancheCriterion">strict avalanche criterion</A>,
<A HREF = "#Complete">complete</A>,
<A HREF = "#S-Box">S-box</A>, and the
<A HREF = "http://www.io.com/~ritter/JAVASCRP/BINOMPOI.HTM#BitChanges">bit changes</A>
section of the
<A HREF = "http://www.io.com/~ritter/#JavaScript">Ciphers By Ritter /
JavaScript</A> computation pages.
<A NAME = "AvalancheEffect"></A>
<P><DT><B>Avalanche Effect</B>
<DD>The result of
<A HREF = "#Avalanche">avalanche</A>.
As described by Webster and Tavares:
<BLOCKQUOTE>
"For a given transformation to exhibit the avalanche effect,
an average of one half of the output bits should change whenever
a single input bit is complemented." [p.523]
</BLOCKQUOTE>
<P>Webster, A. and S. Tavares. 1985. On the Design of
<A HREF = "#S-Box">S-Boxes</A>.
<I>Advances in Cryptology -- CRYPTO '85.</I> 523-534.
<P>Also see the
<A HREF = "http://www.io.com/~ritter/JAVASCRP/BINOMPOI.HTM#BitChanges">bit changes</A>
section of the
<A HREF = "http://www.io.com/~ritter/#JavaScript">Ciphers By Ritter /
JavaScript</A> computation pages.
<A NAME = "BackDoor"></A>
<P><DT><HR><P><B>Back Door</B>
<DD>A
<A HREF = "#Cipher">cipher</A> design fault, planned or accidental,
which allows the apparent strength of the design to be easily
avoided by those who know the trick. When the design background
of a cipher is kept secret, a back door is often suspected.
Similar to <A HREF = "#TrapDoor">trap door</A>.
<A NAME = "Balance"></A>
<P><DT><B>Balance</B>
<DD>A term used in
<A HREF = "#S-Box">S-box</A> and
<A HREF = "#BooleanFunction">Boolean function</A> analysis. As
described by Lloyd:
<BLOCKQUOTE>
"A function is balanced if, when all input vectors are equally
likely, then all output vectors are equally likely."
</BLOCKQUOTE>
<P>Lloyd, S. 1990. Properties of binary functions.
<I>Advances in Cryptology -- EUROCRYPT '90.</I> 124-139.
<P>There is some desire to generalize this definition to describe
multiple-input functions. (Is a function "balanced" if, for one
value on the first input, all output values can be produced, but
for another value on the first input, only <I>some</I> output values
are possible?) Presumably a two-input balanced function would
be balanced for either input fixed at any value, which would
essentially be a
<A HREF = "#LatinSquare">Latin square</A> or a
<A HREF = "#LatinSquareCombiner">Latin square combiner</A>.
<A NAME = "BalancedBlockMixer"></A>
<P><DT><B>Balanced Block Mixer</B>
<DD>A process or any implementation (for example,
<A HREF = "#Hardware">hardware</A>,
<A HREF = "#Computer">computer</A>
<A HREF = "#Software">software</A>, hybrids, or the like) for performing
<A HREF = "#BalancedBlockMixing">Balanced Block Mixing</A>.
<A NAME = "BalancedBlockMixing"></A>
<P><DT><B>Balanced Block Mixing</B>
<DD>The
<A HREF = "#Block">block</A>
<A HREF = "#Mixing">mixing</A>
<A HREF = "#Mechanism">mechanism</A>
described in U.S. Patent 5,623,549 (see the
<A HREF = "http://www.io.com/~ritter/#BBMTech">BBM articles</A> on the
<A HREF = "http://www.io.com/~ritter/">Ciphers By Ritter</A> page).
<P>A
<A HREF = "#Balance">Balanced</A>
Block Mixer is an <I>m</I>-input-port <I>m</I>-output-port
mechanism with various properties:
<OL>
<P><LI>The overall mapping is one-to-one and invertible: Every
possible input value (over all ports) to the mixer produces
a different output value (including all ports), and every
possible output value is produced by a different input value;
<P><LI>Each output port is a function of every input port;
<P><LI>Any change to any one of the input ports will produce a
change to every output port;
<P><LI>Stepping any one input port through all possible values
(while keeping the other input ports fixed) will step every
output port through all possible values.
</OL>
<P>If we have a two port mixer, with input ports labeled <I>A</I>
and <I>B,</I> output ports labeled <I>X</I> and <I>Y,</I> and some
<A HREF = "#Irreducible">irreducible</A>
<A HREF = "#Mod2Polynomial">mod 2 polynomial</A>
<I>p</I> of degree appropriate to the port size, a Balanced Block Mixer
is formed by the equations:
<BLOCKQUOTE>
X = 3A + 2B (mod 2)(mod p),<BR>
Y = 2A + 3B (mod 2)(mod p).
</BLOCKQUOTE>
<P>This particular BBM is a self-inverse or
<A HREF = "#Involution">involution</A>, and so can be used without
change whether enciphering or deciphering.
One possible value for <I>p</I> for mixing 8-bit values is 100011011.
<P>Balanced Block Mixing functions probably should be thought of as
<A HREF = "#OrthogonalLatinSquares">orthogonal Latin squares</A>.
For example, here is a tiny nonlinear "2-bit" BBM:
<PRE>
   3 1 2 0     0 3 2 1     30 13 22 01
   0 2 1 3     2 1 0 3  =  02 21 10 33
   1 3 0 2     1 2 3 0     11 32 03 20
   2 0 3 1     3 0 1 2     23 00 31 12
</PRE>
<P>Suppose we wish to mix (1,3); 1 selects the second row up in both
squares, and 3 selects the rightmost column, thus selecting (2,0)
as the output. Since there is only one occurrence of (2,0) among
all entry pairs, this discrete mixing function is reversible, as well
as being balanced on both inputs.
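<P>The two-port BBM equations above can be sketched directly in Python
(my illustration, using the suggested polynomial 100011011 for 8-bit
port values), verifying both the involution and the balance properties:

```python
P = 0b100011011   # the suggested degree-8 irreducible polynomial p

def gf_mul(a, b):
    """Multiply two 8-bit values as mod 2 polynomials, reduced mod p."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= P
        b >>= 1
    return r

def bbm(a, b):
    """X = 3A + 2B, Y = 2A + 3B, where + is mod 2 (XOR) addition."""
    return gf_mul(3, a) ^ gf_mul(2, b), gf_mul(2, a) ^ gf_mul(3, b)

# Self-inverse: mixing the mixed pair returns the original inputs.
for a, b in [(0x12, 0x34), (0x00, 0xFF), (0xA5, 0x5A)]:
    assert bbm(*bbm(a, b)) == (a, b)

# Balance: stepping one input through all values (other input fixed)
# steps each output through all possible values.
assert len({bbm(a, 0x42)[0] for a in range(256)}) == 256
```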
<P>Cryptographic advantages of balanced block mixing include the fact
that each output is always balanced with respect to either input, and
that no information is lost in the mixing. This allows us to use
balanced block mixing as the "butterfly" operations in a
<A HREF = "#FastWalshTransform">fast Walsh-Hadamard transform</A>
or the well-known
<A HREF = "#FFT">FFT</A>. By using the mixing patterns of these
transforms, we can mix 2<SUP>n</SUP> elements such that each input
is guaranteed to affect each and every output in a balanced way.
And if we use
<A HREF = "#Key">keying</A> to generate the tables, we can have a
way to mix huge blocks in small nonlinear mixing tables with
overall mixing guarantees.
<P>Also see
<A HREF = "#MixingCipher">Mixing Cipher</A>,
<A HREF = "#DynamicSubstitutionCombiner">Dynamic Substitution Combiner</A>,
<A HREF = "#VariableSizeBlockCipher">Variable Size Block Cipher</A>, and the
<A HREF = "http://www.io.com/~ritter/JAVASCRP/ACTIVBBM.HTM">Active
Balanced Block Mixing in JavaScript</A> page of the
<A HREF = "http://www.io.com/~ritter/#JavaScript">Ciphers By Ritter /
JavaScript</A> computation pages.
<A NAME = "BalancedCombiner"></A>
<P><DT><B>Balanced Combiner</B>
<DD>In the context of
<A HREF = "#Cryptography">cryptography</A>, a
<A HREF = "#Combiner">combiner</A>
<A HREF = "#Mixing">mixes</A> two input
values into a result value. A balanced combiner must provide a
<A HREF = "#Balance">balanced</A> relationship between each input
and the result.
<P>In a <I>statically-balanced</I> combiner, any particular result
value can be produced by any value on one input, simply by
selecting some appropriate value for the other input. In this way,
knowledge of only the output value provides no information -- not
even statistical information -- about either input.
<P>The common examples of cryptographic combiners, including byte
<A HREF = "#ExclusiveOR">exclusive-OR</A>
(<A HREF = "#Mod2">mod 2</A>
<A HREF = "#Polynomial">polynomial</A> addition), byte addition
(integer addition <A HREF = "#Modulo">mod</A> 256), or other
<A HREF = "#AdditiveCombiner">"additive" combining</A>, are
perfectly balanced. Unfortunately, these simple combiners are
also very weak, being inherently
<A HREF = "#Linear">linear</A> and without internal
<A HREF = "#State">state</A>.
<P>A <A HREF = "#LatinSquareCombiner">Latin square combiner</A>
is an example of a statically-balanced
reversible nonlinear combiner with massive internal state.
A <A HREF = "#DynamicSubstitutionCombiner">Dynamic Substitution
Combiner</A> is an example of a dynamically or
statistically-balanced reversible nonlinear combiner with
substantial internal state.
<A NAME = "Base64"></A>
<P><DT><B>Base-64</B>
<DD>A public
<A HREF = "#Code">code</A> for converting between
6-<A HREF = "#Bit">bit</A> values 0..63 (or 00..3f
<A HREF = "#Hexadecimal">hex</A>) and text symbols accepted
by most computers:
<PRE>
0 1 2 3 4 5 6 7 8 9 a b c d e f
0 A B C D E F G H I J K L M N O P
1 Q R S T U V W X Y Z a b c d e f
2 g h i j k l m n o p q r s t u v
3 w x y z 0 1 2 3 4 5 6 7 8 9 + /
use "=" for padding
</PRE>
<A NAME = "Bel"></A>
<P><DT><B>Bel</B>
<DD>The base-10 logarithm of the ratio of two
<A HREF = "#Power">power</A> values (which is also the same as
the difference between the log of each power value). The basis
for the more-common term
<A HREF = "#Decibel">decibel</A>: One bel equals 10 decibels.
<A NAME = "BentFunction"></A>
<P><DT><B>Bent Function</B>
<DD>A bent function is a
<A HREF = "#BooleanFunction">Boolean function</A> whose
<A HREF = "#FastWalshTransform">fast Walsh transform</A> has the same
absolute value in each term (except, possibly, the zeroth). This
means that the bent function has the same
<A HREF = "#HammingDistance">distance</A> from every possible
<A HREF = "#AffineBooleanFunction">affine Boolean function</A>.
<P>We can do FWT's in "the bottom panel" at the end of
<A HREF = "http://www.io.com/~ritter/JAVASCRP/NONLMEAS.HTM">Active
Boolean Function Nonlinearity Measurement in JavaScript</A> page of the
<A HREF = "http://www.io.com/~ritter/#JavaScript">Ciphers By Ritter /
JavaScript</A> computation pages.
<P>Here is every bent sequence of length 4, first in {0,1} notation,
then in {1,-1} notation, with their FWT results:
<PRE>
bent {0,1}       FWT            bent {1,-1}        FWT

 0 0 0 1      1 -1 -1  1       1  1  1 -1      2  2  2 -2
 0 0 1 0      1  1 -1 -1       1  1 -1  1      2 -2  2  2
 0 1 0 0      1 -1  1 -1       1 -1  1  1      2  2 -2  2
 1 0 0 0      1  1  1  1      -1  1  1  1      2 -2 -2 -2
 1 1 1 0      3  1  1 -1      -1 -1 -1  1     -2 -2 -2  2
 1 1 0 1      3 -1  1  1      -1 -1  1 -1     -2  2 -2  2
 1 0 1 1      3  1 -1  1      -1  1 -1 -1     -2 -2  2 -2
 0 1 1 1      3 -1 -1 -1       1 -1 -1 -1     -2  2  2  2
</PRE>
These sequences, like all true bent sequences, are <B>not</B>
<A HREF = "#Balance">balanced</A>, and the zeroth element of the
{0,1} FWT is the number of 1's in the sequence.
<P>Here are some bent sequences of length 16:
<PRE>
bent {0,1} 0 1 0 0 0 1 0 0 1 1 0 1 0 0 1 0
FWT 6,-2,2,-2,2,-2,2,2,-2,-2,2,-2,-2,2,-2,-2
bent {1,-1} 1 -1 1 1 1 -1 1 1 -1 -1 1 -1 1 1 -1 1
FWT 4,4,-4,4,-4,4,-4,-4,4,4,-4,4,4,-4,4,4
bent {0,1} 0 0 1 0 0 1 0 0 1 0 0 0 1 1 1 0
FWT 6,2,2,-2,-2,2,-2,2,-2,-2,-2,-2,2,2,-2,-2
bent {1,-1} 1 1 -1 1 1 -1 1 1 -1 1 1 1 -1 -1 -1 1
FWT 4,-4,-4,4,4,-4,4,-4,4,4,4,4,-4,-4,4,4
</PRE>
<P>Bent sequences are said to have the highest possible uniform
nonlinearity. But, to put this in perspective, recall that we
<I>expect</I> a random sequence of 16 bits to have 8 bits different
from any particular sequence, linear or otherwise. That is also
the <I>maximum possible</I> nonlinearity, and here we actually
<I>get</I> a nonlinearity of 6.
<P>There are various more or less complex constructions for these
sequences. In most cryptographic uses, bent sequences are modified
slightly to achieve balance.
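<P>The FWT used above can be sketched in a few lines of Python (an
illustration; the butterfly form is the standard in-place fast
Walsh-Hadamard transform), reproducing the length-4 results from the
table:

```python
def fwht(v):
    """Fast Walsh-Hadamard transform via butterfly passes."""
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), h * 2):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v

# The {1,-1} bent sequence 1 1 1 -1 from the table:
assert fwht([1, 1, 1, -1]) == [2, 2, 2, -2]

def is_bent(seq):
    """Flat spectrum: every FWT term has the same absolute value
    (including the zeroth, for a {1,-1} sequence)."""
    return len({abs(t) for t in fwht(seq)}) == 1

assert is_bent([1, 1, 1, -1])
assert not is_bent([1, 1, 1, 1])   # affine: spectrum 4,0,0,0
```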
<A NAME = "BernoulliTrials"></A>
<P><DT><B>Bernoulli Trials</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, observations or sampling
with replacement in which each trial has exactly two possible
outcomes, typically called "success" and "failure." Bernoulli
trials have these
characteristics:
<UL>
<LI>Each trial is independent,
<LI>Each outcome is determined only by chance, and
<LI>The probability of success is fixed.
</UL>
<P>Bernoulli trials have a
<A HREF = "#BinomialDistribution">Binomial distribution</A>.
<A NAME = "Bijective"></A>
<P><DT><B>Bijective</B>
<DD>A <A HREF = "#Mapping">mapping</A> f: <I>X -> Y</I> which is
both
<A HREF = "#OneToOne">one-to-one</A> and
<A HREF = "#Onto">onto</A>.
For each unique <I>x</I> in <I>X</I> there is a corresponding
unique <I>y</I> in <I>Y</I>.
An invertible mapping function.
<A NAME = "Binary"></A>
<P><DT><B>Binary</B>
<DD>From the Latin for "dual" or "pair." Dominantly used to indicate
"base 2": The numerical representation in which each digit has an
<A HREF = "#Alphabet">alphabet</A> of only two symbols: 0 and 1.
This is just one particular
<A HREF = "#Code">coding</A> or representation of a value which
might otherwise be represented (with the exact same value) as
<A HREF = "#Octal">octal</A> (base 8),
<A HREF = "#Decimal">decimal</A> (base 10), or
<A HREF = "#Hexadecimal">hexadecimal</A> (base 16). Also see
<A HREF = "#Bit">bit</A> and
<A HREF = "#Boolean">Boolean</A>.
<P>Possibly also the confusing counterpart to
<A HREF = "#Unary">unary</A> when describing the number of inputs
or arguments to a function, but
<A HREF = "#Dyadic">dyadic</A> is almost certainly a better choice.
<A NAME = "BinomialDistribution"></A>
<P><DT><B>Binomial Distribution</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, the probability of finding
exactly <I>k</I> successes in <I>n</I> independent
<A HREF = "#BernoulliTrials">Bernoulli trials</A>, when
each trial has success probability <I>p</I>:
<PRE>
             n   k      n-k
 P(k,n,p) = ( ) p  (1-p)
             k
</PRE>
<P>This ideal
<A HREF = "#Distribution">distribution</A> is produced by evaluating
the probability function for all possible <I>k,</I> from 0 to
<I>n.</I>
<P>If we have an experiment which we think <I>should</I> produce a
binomial distribution, and then repeatedly and systematically find
very improbable test values, we may choose to reject the
<A HREF = "#NullHypothesis">null hypothesis</A> that the experimental
distribution is in fact binomial.
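<P>A direct Python rendering of the probability function (illustrative;
the parameter values are arbitrary):

```python
from math import comb

def binomial(k, n, p):
    """P(k,n,p) = (n C k) * p^k * (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# The ideal distribution: evaluate for every k from 0 to n; it sums to 1.
dist = [binomial(k, 10, 0.3) for k in range(11)]
assert abs(sum(dist) - 1.0) < 1e-12

# Fair coin: the chance of exactly 2 successes in 4 trials is 6/16.
assert abs(binomial(2, 4, 0.5) - 0.375) < 1e-12
```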
<P>Also see the
<A HREF = "http://www.io.com/~ritter/JAVASCRP/BINOMPOI.HTM#Binomial">binomial</A>
section of the
<A HREF = "http://www.io.com/~ritter/#JavaScript">Ciphers By Ritter /
JavaScript</A> computation pages.
<A NAME = "BirthdayAttack"></A>
<P><DT><B>Birthday Attack</B>
<DD>A form of
<A HREF = "#Attack">attack</A> in which it is necessary to obtain
two identical values from a large
<A HREF = "#Population">population</A>. The "birthday"
part is the realization that it is far easier to find an arbitrary
matching pair than to match any particular value. Often a
<A HREF = "#Hash">hash</A> attack.
<P>Also see:
<A HREF = "#BirthdayParadox">birthday paradox</A>.
<A NAME = "BirthdayParadox"></A>
<P><DT><B>Birthday Paradox</B>
<DD>The apparent paradox that, in a schoolroom of only 23 students,
there is a 50 percent probability that at least two will have the
same birthday. The "paradox" is that we have an even chance of
success with at most 23 different days represented.
<P>The "paradox" is resolved by noting that we have a 1/365 chance
of success for each possible <I>pairing</I> of students, and there
are 253 possible pairs or
<A HREF = "#Combination">combinations</A> of 23 things taken 2 at
a time. (To count the number of pairs, we can choose any of the 23
students as part of the pair, then any of the 22 remaining students
as the other part. But this counts each pair twice, so we have
<NOBR>23 * 22 / 2 = 253</NOBR> different pairs.)
<P>We can compute the overall probability of success from the
probability of <I>failure</I> <NOBR>(1 - 1/365 = 0.99726)</NOBR>
multiplied by itself for each pair. The overall probability of
failure is thus 0.99726<SUP>253</SUP> (0.99726 to the 253rd power)
or 0.4995. So the success probability for 253 pairs is 0.5005.
<P>We can relate the probability of finding at least one "double"
of some birthday (Pd) to the expected number of doubles (Ed) as:
<PRE>
<BIG><TT>Pd = 1 - e<SUP>-Ed</SUP></TT></BIG> ,
so
<BIG><TT>Ed = -Ln( 1 - Pd )</TT></BIG>
and
<BIG><TT>365 * -Ln( 0.5 ) = 365 * 0.693 = 253</TT></BIG> .
</PRE>
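<P>These relations are easy to confirm numerically; a small Python
sketch (illustrative only) checks both the pair-counting approximation
and the exact product:

```python
from math import log, prod

# Pair-counting approximation from the text: 253 pairs, each
# failing to match with probability 1 - 1/365.
pairs = 23 * 22 // 2
assert pairs == 253
p_success = 1 - (1 - 1/365) ** pairs
assert abs(p_success - 0.5005) < 0.001

# Exact computation: multiply each student's "no collision" odds.
p_none = prod((365 - k) / 365 for k in range(23))
assert 1 - p_none > 0.5          # better than even with 23 students

# Ed = -Ln(1 - Pd): doubles expected at the 50 percent point.
assert round(365 * -log(0.5)) == 253
```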
<P>Also see:
<A HREF = "ARTS/BIRTHDAY.HTM">Estimating Population from Repetitions
in Accumulated Random Samples</A>, my "birthday" article.
<A NAME = "Bit"></A>
<P><DT><B>Bit</B>
<DD>A contraction of
"<A HREF = "#Binary">binary</A> digit." The smallest possible unit
of information. A
<A HREF = "#Boolean">Boolean</A> value: True or False; Yes or No;
one or zero; Set or Cleared. Virtually all information to be
communicated or stored digitally is
<A HREF = "#Code">coded</A> in some way which
fundamentally relies on individual bits. Alphabetic characters
are often stored in eight bits, which is a
<A HREF = "#Byte">byte</A>.
<A NAME = "Block"></A>
<P><DT><B>Block</B>
<DD>Some amount of data treated as a single unit. For example, the
<A HREF = "#DES">DES</A>
<A HREF = "#BlockCipher">block cipher</A> has a
64-<A HREF = "#Bit">bit</A> block. So DES ciphers 64 bits (8
<A HREF = "#Byte">bytes</A> or typically 8
<A HREF = "#ASCII">ASCII</A> characters) at once.
<P>A 64-bit block supports 2<SUP>64</SUP> or about
<NOBR>1.8 x 10<SUP>19</SUP></NOBR> block values or code values.
Each different
<A HREF = "#Permutation">permutation</A> of those values can be
considered a complete
<A HREF = "#Code">code</A>. A block cipher has the ability to
select from among many such codes using a
<A HREF = "#Key">key</A>.
<P>It is not normally possible to block-cipher just a single
<A HREF = "#Bit">bit</A> or a single
<A HREF = "#Byte">byte</A> of a block.
An arbitrary stream of data can always be partitioned into one
or more fixed-size blocks, but it is likely that at least one
block will not be completely filled. Using fixed-size blocks
generally means that the associated system must support data
expansion in enciphering, if only by one block. Handling even
minimal data expansion may be difficult in some systems.
<A NAME = "BlockCipher"></A>
<P><DT><B>Block Cipher</B>
<DD>A
<A HREF = "#Cipher">cipher</A> which requires the accumulation
of data (in a
<A HREF = "#Block">block</A>) before ciphering can complete.
Other than simple
<A HREF = "#Transposition">transposition</A> ciphers, this seems
to be the province of ciphers designed to <I>emulate</I> a
<A HREF = "#Key">keyed</A>
<A HREF = "#SimpleSubstitution">simple substitution</A> with a
<A HREF = "#SubstitutionTable">table</A> of size far too large to
realize. A block cipher operates on a
<A HREF = "#Block">block</A> of data (for example, multiple
<A HREF = "#Byte">bytes</A>) in a single ciphering, as opposed to a
<A HREF = "#StreamCipher">stream cipher</A>, which operates on
bytes or
<A HREF = "#Bit">bits</A> as they occur. Block ciphers can be called
"<A HREF = "#Codebook">codebook</A>-style" ciphers. Also see
<A HREF = "#VariableSizeBlockCipher">Variable Size Block Cipher</A>.
<P>A <A HREF = "#BlockCipher">block cipher</A> is a transformation
between
<A HREF = "#Plaintext">plaintext</A> block values and
<A HREF = "#Ciphertext">ciphertext</A> block values, and is thus an
emulated
<A HREF = "#SimpleSubstitution">simple substitution</A> on huge
block-wide values. Within a particular block size, both plaintext
and ciphertext have the same set of possible values, and when the
ciphertext values have the same ordering as the plaintext, ciphering
is obviously ineffective. So <I>effective</I> ciphering depends upon
<I>re-arranging</I> the ciphertext values from the plaintext
ordering, and this is a
<A HREF = "#Permutation">permutation</A> of the plaintext values.
A block cipher is <A HREF = "#Key">keyed</A> by constructing a
<I>particular</I> permutation of ciphertext values.
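<P>As an illustration (not any standard construction), a toy 8-bit
block is small enough that the keyed permutation of ciphertext values
can actually be built as an explicit table. The function name and the
use of a seeded shuffle as the "key schedule" are assumptions for
demonstration only:

```python
import random

def make_cipher(key, block_bits=8):
    # Key the toy cipher by constructing a particular permutation of
    # all 2**block_bits block values: a fully realized, keyed simple
    # substitution table (illustrative only -- NOT secure).
    table = list(range(2 ** block_bits))
    random.Random(key).shuffle(table)      # the key selects the permutation
    inverse = [0] * len(table)
    for plain, cipher in enumerate(table):
        inverse[cipher] = plain
    return table.__getitem__, inverse.__getitem__
```

<P>A real block cipher does the same thing for block values far too
numerous to tabulate, which is why the table must be emulated rather
than stored.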
<H4>Block Cipher Data Diffusion</H4>
<P>In an ideal block cipher, changing even a single bit of the
input block will change all bits of the ciphertext result, each
with independent probability 0.5. This means that about half of
the bits in the output will change for any different input block,
even for differences of just one bit. This is
<A HREF = "#OverallDiffusion">overall diffusion</A> and is
present in a block cipher, but not in a
<A HREF = "#StreamCipher">stream cipher</A>. Data diffusion is
a simple consequence of the keyed invertible simple substitution
nature of the ideal block cipher.
<P>Improper diffusion of data throughout a block cipher can have
serious strength implications. One of the functions of data
diffusion is to hide the different effects of different internal
components. If these effects are not in fact hidden, it may be
possible to attack each component separately, and break the
whole cipher fairly easily.
<H4>Partitioning Messages into Fixed Size Blocks</H4>
<P>A large message can be ciphered by partitioning the plaintext
into blocks of a size which can be ciphered. This essentially
creates a stream meta-cipher which repeatedly uses the same block
cipher transformation. Of course, it is also possible to re-key
the block cipher for each and every block ciphered, but this is
usually expensive in terms of computation and normally unnecessary.
<P>A message of arbitrary size can always be partitioned into some
number of whole blocks, with possibly some space remaining in the
final block. Since partial blocks cannot be ciphered, some
random
<A HREF = "#Padding">padding</A> can be introduced to fill out the
last block, and this naturally expands the ciphertext. In this
case it may also be necessary to introduce some sort of structure
which will indicate the number of valid bytes in the last block.
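<P>One hypothetical padding scheme along these lines fills out the
last block with random bytes and uses a final count byte as the
structure indicating how much padding was added; the names and layout
here are assumptions for illustration, not a standard:

```python
import os

BLOCK = 8  # toy block size in bytes

def pad(data):
    # 1..BLOCK bytes of padding: random filler, then a count byte.
    # At least one byte is always added, so a message which exactly
    # fills its blocks grows by one whole block -- the unavoidable
    # data expansion noted above.
    n = BLOCK - (len(data) % BLOCK)
    return data + os.urandom(n - 1) + bytes([n])

def unpad(padded):
    # The final byte tells us how many padding bytes to discard.
    return padded[:-padded[-1]]
```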
<H4>Block Partitioning without Expansion</H4>
<P>Proposals for using a block cipher supposedly <I>without</I>
data expansion may involve creating a tiny
<A HREF = "#StreamCipher">stream cipher</A> for the last block.
One scheme is to re-encipher the ciphertext of the preceding
block, and use the result as the
<A HREF = "#ConfusionSequence">confusion sequence</A>. Of course,
the cipher designer still needs to address the situation of files
which are so short that they <I>have</I> no preceding block.
Because the one-block version is <I>in fact</I> a stream cipher,
we must be very careful to never re-use a confusion sequence.
But when we only <I>have</I> one block, there <I>is</I> no prior
block to change as a result of the data. In this case, ciphering
several very short files could expose those files quickly.
Furthermore, it is dangerous to encipher a
<A HREF = "#CRC">CRC</A> value in such a block, because
exclusive-OR enciphering is transparent to the field of mod 2
polynomials in which the CRC operates. Doing this could allow an
Opponent to adjust the message CRC in a known way, thus avoiding
authentication exposure.
<P>Another proposal for eliminating data expansion consists of
ciphering blocks until the last short block, then re-positioning
the ciphering window to end at the last of the data, thus
re-ciphering part of the prior block. This is a form of chaining
and imposes a sequentiality requirement: the last block must be
deciphered <I>before</I> the next-to-the-last
block. Or we can make enciphering inconvenient and deciphering
easy, but one way will be a problem. And this approach cannot
handle very short messages: its minimum size is one block. Yet
any general-purpose ciphering routine <I>will</I> encounter short
messages. Even worse, if we have a short message, we still need
to somehow indicate the correct length of the message, and this
must expand the message, as we saw before. Thus, overall, this
seems a somewhat dubious technique.
<P>On the other hand, it does show a way to chain blocks for
authentication in a large-block cipher: We start out by
enciphering the data in the first block. Then we position the
next ciphering to start <I>inside</I> the ciphertext of the previous
block. Of course this would mean that we would have to decipher
the message in reverse order, but it would also propagate any
ciphertext changes through the end of the message. So if we add
an authentication field at the end of the message (a keyed value
known on both ends), and that value is recovered upon deciphering
(this will be the first block deciphered) we can authenticate the
whole message. But we still need to handle the last block
padding problem and possibly also the short message problem.
<H4>Block Size and Plaintext Randomization</H4>
<P>Ciphering raw plaintext data can be dangerous when the cipher
has a small block size. Language plaintext has a strong, biased
distribution of symbols and ciphering raw plaintext would
effectively reduce the number of possible plaintext blocks.
Worse, some plaintexts would be vastly more probable than others,
and if some
<A HREF = "#KnownPlaintextAttack">known plaintext</A> were available,
the most-frequent blocks might already be known. In this way,
small blocks can be vulnerable to classic
<A HREF = "#CodebookAttack">codebook attacks</A> which
build up the ciphertext equivalents for many of the plaintext
phrases. This sort of attack confronts a particular block size,
and for these attacks Triple-DES is no stronger than simple DES,
because they both have the same block size.
<P>The usual way of avoiding these problems is to randomize
the plaintext block with an
<A HREF = "#OperatingMode">operating mode</A> such as
<A HREF = "#CBC">CBC</A>. This can ensure that the plaintext
data which is actually ciphered is evenly distributed across
all possible block values. However, this also requires an
<A HREF = "#IV">IV</A> which thus expands the ciphertext.
<P>Another approach is to apply data compression to the plaintext
before enciphering. If this is to be used <I>instead</I> of
plaintext randomization, the designer must be very careful that
the data compression does not contain regular features which
could be exploited by The Opponents.
<P>An alternate approach is to use blocks of sufficient size
for them to be expected to have a substantial amount of uniqueness
or "entropy." If we expect plaintext to have about one bit of
entropy per byte of text, we might want a block size of at
least 64 bytes before we stop worrying about an uneven
distribution of plaintext blocks. This is now a practical
block size.
<A NAME = "Boolean"></A>
<P><DT><B>Boolean</B>
<DD>TRUE or FALSE; one
<A HREF = "#Bit">bit</A> of information.
<A NAME = "BooleanFunction"></A>
<P><DT><B>Boolean Function</B>
<DD>A function which produces a
<A HREF = "#Boolean">Boolean</A> result. The individual output
<A HREF = "#Bit">bits</A> of an
<A HREF = "#S-Box">S-box</A> can each be considered to be
separate Boolean functions.
<A NAME = "BooleanFunctionNonlinearity"></A>
<P><DT><B>Boolean Function Nonlinearity</B>
<DD>The number of
<A HREF = "#Bit">bits</A> which must change in the
<A HREF = "#TruthTable">truth table</A> of a
<A HREF = "#BooleanFunction">Boolean function</A> to reach the closest
<A HREF = "#AffineBooleanFunction">affine Boolean function</A>.
This is the
<A HREF = "#HammingDistance">Hamming distance</A> from the closest
"<A HREF = "#Linear">linear</A>" function.
<P>Typically computed by using a
<A HREF = "#FastWalshTransform">fast Walsh-Hadamard transform</A>
on the
<A HREF = "#Boolean">Boolean</A>-valued truth table of the function.
This produces the
<A HREF = "#UnexpectedDistance">unexpected distance</A> to every
possible affine Boolean function (of the given length). Scanning
those results for the maximum value implies the minimum distance
to some particular affine sequence.
<P>Especially useful in
<A HREF = "#S-Box">S-box</A> analysis, where the nonlinearity for
the
<A HREF = "#SubstitutionTable">table</A> is often taken to be the
minimum of the nonlinearity values computed for each output bit.
<P>Also see the
<A HREF = "http://www.io.com/~ritter/JAVASCRP/NONLMEAS.HTM">Active
Boolean Function Nonlinearity Measurement in JavaScript</A> page of the
<A HREF = "http://www.io.com/~ritter/#JavaScript">Ciphers By Ritter /
JavaScript</A> computation pages.
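<P>A minimal sketch of the computation just described (the function
names are assumptions for illustration):

```python
def walsh_spectrum(truth_table):
    # Fast Walsh-Hadamard transform of a Boolean function's truth
    # table (a list of 0/1 values whose length is a power of two),
    # taken over the usual {+1,-1} encoding.
    w = [1 - 2 * b for b in truth_table]
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                a, b = w[j], w[j + h]
                w[j], w[j + h] = a + b, a - b
        h *= 2
    return w

def nonlinearity(truth_table):
    # Scan the spectrum for its maximum absolute value; that implies
    # the minimum distance to some particular affine function.
    n = len(truth_table)
    return (n - max(abs(v) for v in walsh_spectrum(truth_table))) // 2
```

<P>For example, the two-input AND function (truth table 0,0,0,1) has
nonlinearity 1: changing a single truth table bit makes it affine.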
<A NAME = "BooleanLogic"></A>
<P><DT><B>Boolean Logic</B>
<DD>The
<A HREF = "#Logic">logic</A> which applies to variables which have
only two possible values. Also the
<A HREF = "#Digital">digital</A>
<A HREF = "#Hardware">hardware</A> devices which realize such
logic, and are used to implement
<A HREF = "#Electronic">electronic</A> digital
<A HREF = "#Computer">computers</A>.
<A NAME = "BooleanMapping"></A>
<P><DT><B>Boolean Mapping</B>
<DD>A
<A HREF = "#Mapping">mapping</A> of some number <I>n</I>
<A HREF = "#Boolean">Boolean</A> variables into
some number <I>m</I> Boolean results.
For example, an
<A HREF = "#S-Box">S-box</A>.
<A NAME = "Break"></A>
<P><DT><B>Break</B>
<DD>The result of a successful
<A HREF = "#Cryptanalysis">cryptanalytic</A>
<A HREF = "#Attack">attack</A>.
To destroy the advantage of a
<A HREF = "#Cipher">cipher</A>
in hiding information.
<P>A
<A HREF = "#Cipher">cipher</A> is "broken" when the information in
a message can be extracted without the
<A HREF = "#Key">key</A>, or when the key itself can be recovered.
The <A HREF = "#Strength">strength</A> of a cipher can be considered
to be the minimum effort required for a break, by any possible
attack. A break is particularly significant when the work involved
need not be repeated on every message.
<P>The use of the term "break" can be misleading when an impractical
amount of work is required to achieve the break. This case might
be better described as a "theoretical" or "certificational"
<I>weakness</I>.
<A NAME = "BlockSize"></A>
<P><DT><B>Block Size</B>
<DD>The amount of data in a
<A HREF = "#Block">block</A>. For example, the size of the
<A HREF = "#DES">DES</A> block is 64
<A HREF = "#Bit">bits</A> or 8
<A HREF = "#Byte">bytes</A> or 8 octets.
<A NAME = "BruteForceAttack"></A>
<P><DT><B>Brute Force Attack</B>
<DD>A form of
<A HREF = "#Attack">attack</A> in which each possibility is tried
until success is obtained. Typically, a
<A HREF = "#Ciphertext">ciphertext</A> is
<A HREF = "#Decipher">deciphered</A>
under different
<A HREF = "#Key">keys</A> until
<A HREF = "#Plaintext">plaintext</A> is recognized. On average,
this may take about half as many decipherings as there are keys.
<P>Recognizing plaintext may or may not be easy. Even when the
key length of a cipher is sufficient to prevent brute force attack,
that key will be far too small to produce every possible plaintext
from a given ciphertext (see
<A HREF = "#PerfectSecrecy">perfect secrecy</A>). Combined with
the fact that language is redundant, this means that very few of
the decipherings will be words in proper form. Of course, if the
plaintext is not language, but is instead computer code, compressed
text, or even ciphertext from another cipher, recognizing a correct
deciphering can be difficult.
<P>Brute force is the obvious way to attack a cipher, and the way
any cipher can be attacked, so ciphers are designed to have a large
enough
<A HREF = "#Keyspace">keyspace</A> to make this much too expensive
to use in practice. Normally, the design
<A HREF = "#Strength">strength</A> of a cipher is based on the
cost of a brute-force attack.
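<P>The sketch below brute-forces a deliberately weak toy "cipher"
(single-byte exclusive-OR) whose 256-key keyspace can be searched
instantly; all names are hypothetical, and the crude recognizer also
shows why recognizing plaintext may admit false matches:

```python
def xor_cipher(data, key):
    # Toy "cipher": XOR every byte with a one-byte key (NOT secure).
    return bytes(b ^ key for b in data)

def looks_like_text(data):
    # Crude plaintext recognizer: printable ASCII containing a space.
    return all(32 <= b < 127 for b in data) and 32 in data

def brute_force(ciphertext):
    # Try each possibility until success: here, collect every key
    # whose deciphering is recognized as plausible plaintext.
    return [key for key in range(256)
            if looks_like_text(xor_cipher(ciphertext, key))]
```

<P>A real cipher's keyspace is designed to make exactly this search
far too expensive to use in practice.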
<A NAME = "Bug"></A>
<P><DT><B>Bug</B>
<DD>Technical slang for "error in design or implementation."
An unexpected
<A HREF = "#System">system</A> flaw.
<A HREF = "#Debug">Debugging</A> is a normal part of system
development and interactive
<A HREF = "#SystemDesign">system design</A>.
<A NAME = "Byte"></A>
<P><DT><B>Byte</B>
<DD>A collection of eight
<A HREF = "#Bit">bits</A>. Also called an "octet." A byte
can represent 256 different values or symbols. The common 7-bit
<A HREF = "#ASCII">ASCII</A> codes used to represent characters
in
<A HREF = "#Computer">computer</A> use are generally stored
in a byte; that is, one byte per character.
<A NAME = "Capacitor"></A>
<P><HR><DT><B>Capacitor</B>
<DD>A basic
<A HREF = "#Electronic">electronic</A>
<A HREF = "#Component">component</A>
which acts as a reservoir for electrical power in the form of
<A HREF = "#Voltage">voltage</A>.
A capacitor thus acts to "even out" the voltage across its terminals,
and to "conduct" voltage changes from one terminal to the other.
A capacitor "blocks"
<A HREF = "#DC">DC</A> and conducts
<A HREF = "#AC">AC</A> in proportion to
<A HREF = "#Frequency">frequency</A>.
Capacitance is measured in Farads: A
<A HREF = "#Current">current</A> of 1 Amp into a capacitance
of 1 Farad produces a voltage change of 1 Volt per Second across
the capacitor.
<P>Typically, two
<A HREF = "#Conductor">conductive</A> "plates" or metal foils
separated by a thin
<A HREF = "#Insulator">insulator</A>, such as air, paper, or
ceramic.
An electron charge on one plate attracts the opposite charge on the
other plate, thus "storing" charge.
A capacitor can be used to collect a small current over long time,
and then release a high current for a short time, as used in a
camera strobe or "flash."
<P>Also see
<A HREF = "#Inductor">inductor</A> and
<A HREF = "#Resistor">resistor</A>.
<A NAME = "CBC"></A>
<P><DT><B>CBC</B>
<DD>CBC or Cipher Block Chaining is an
<A HREF = "#OperatingMode">operating mode</A> for
<A HREF = "#BlockCipher">block ciphers</A>.
CBC mode is essentially a crude
meta-<A HREF = "#StreamCipher">stream cipher</A> which streams block
transformations.
<P>In CBC mode the
<A HREF = "#Ciphertext">ciphertext</A> value of the preceding
<A HREF = "#Block">block</A> is
<A HREF = "#ExclusiveOR">exclusive-OR</A> combined with the
<A HREF = "#Plaintext">plaintext</A> value for the current block.
This has the effect of distributing the combined block values
evenly among all possible block values, and so prevents
<A HREF = "#CodebookAttack">codebook attacks</A>.
<P>On the other hand, ciphering the <I>first</I> block generally
requires an
<A HREF = "#IV">IV</A> or initial value to start the process.
The IV necessarily expands the ciphertext, which may or may not
be a problem.
And the IV must be dynamically random-like so that statistics
cannot be developed on the first block of each message sent under
the same key.
<P>In CBC mode, each random-like confusing value is the ciphertext
from each previous block. Clearly this ciphertext is exposed to
The Opponent, so there would seem to be little benefit associated
with hiding the IV, which is just the first of these values.
But if The Opponent knows the first sent plaintext, and can
intercept and change the message IV, The Opponent can manipulate
the first block of received plaintext. Because the IV does not
represent a message enciphering, manipulating this value does not
also change any previous block.
<P>Accordingly, the IV may be sent enciphered or may be specifically
authenticated in some way. Alternately, the complete body of the
plaintext message may be
<A HREF = "#Authentication">authenticated</A>, often by a
<A HREF = "#CRC">CRC</A>. The CRC remainder should be block ciphered,
perhaps as part of the plaintext.
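<P>A minimal sketch of the chaining itself, using an invertible toy
block transform as a stand-in for a real block cipher (everything
here is illustrative and insecure):

```python
BLOCK = 8  # toy block size in bytes

def toy_encipher(block, key):
    # Stand-in block cipher: XOR with the key, rotate left one byte.
    x = bytes(b ^ k for b, k in zip(block, key))
    return x[1:] + x[:1]

def toy_decipher(block, key):
    x = block[-1:] + block[:-1]
    return bytes(b ^ k for b, k in zip(x, key))

def cbc_encipher(plaintext, key, iv):
    out, prev = [], iv
    for i in range(0, len(plaintext), BLOCK):
        # Exclusive-OR each plaintext block with the preceding
        # ciphertext block (the IV, for the first block).
        mixed = bytes(p ^ c for p, c in zip(plaintext[i:i + BLOCK], prev))
        prev = toy_encipher(mixed, key)
        out.append(prev)
    return b"".join(out)

def cbc_decipher(ciphertext, key, iv):
    out, prev = [], iv
    for i in range(0, len(ciphertext), BLOCK):
        block = ciphertext[i:i + BLOCK]
        out.append(bytes(m ^ c
                         for m, c in zip(toy_decipher(block, key), prev)))
        prev = block
    return b"".join(out)
```

<P>Note that two identical plaintext blocks produce different
ciphertext blocks, which is the point of the chaining.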
<A NAME = "cdf"></A>
<P><DT><B>c.d.f.</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, <I>cumulative
<A HREF = "#Distribution">distribution</A> function.</I>
A function which gives the probability of obtaining a particular
value or lower.
<A NAME = "CFB"></A>
<P><DT><B>CFB</B>
<DD>CFB or Ciphertext FeedBack is an
<A HREF = "#OperatingMode">operating mode</A> for a
<A HREF = "#BlockCipher">block cipher</A>.
<P>CFB is closely related to
<A HREF = "#OFB">OFB</A>, and is intended to provide some of the
characteristics of a
<A HREF = "#StreamCipher">stream cipher</A> from a block cipher.
CFB generally forms an
<A HREF = "#Autokey">autokey</A> stream cipher.
CFB is a way of using a block cipher to form a
<A HREF = "#RandomNumberGenerator">random number generator</A>.
The resulting
<A HREF = "#PseudoRandom">pseudorandom</A>
<A HREF = "#ConfusionSequence">confusion sequence</A> can be
<A HREF = "#Combiner">combined</A> with data as in the usual
stream cipher.
<P>CFB assumes a
<A HREF = "#ShiftRegister">shift register</A> of the block cipher
<A HREF = "#Block">block</A> size. An
<A HREF = "#IV">IV</A> or initial value first fills the register,
and then is ciphered. Part of the result, often just a single
<A HREF = "#Byte">byte</A>, is used to cipher data, and the
resulting
<A HREF = "#Ciphertext">ciphertext</A> is also
shifted into the register. The new register value is ciphered,
producing another confusion value for use in stream ciphering.
<P>One disadvantage of this, of course, is the need for a full
block-wide ciphering operation, typically for each data byte
ciphered. The advantage is the ability to cipher individual
characters, instead of requiring accumulation into a block
before processing.
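<P>A minimal sketch of the shift-register arrangement just described,
again with an insecure stand-in for the block cipher; note that only
the enciphering direction of the block transform is ever needed:

```python
BLOCK = 8  # toy block size in bytes

def toy_encipher(block, key):
    # Stand-in block cipher (NOT secure): XOR with key, rotate left.
    x = bytes(b ^ k for b, k in zip(block, key))
    return x[1:] + x[:1]

def cfb_encipher(plaintext, key, iv):
    register, out = iv, bytearray()
    for p in plaintext:
        # A full block-wide ciphering for each data byte; just one
        # byte of the result is used as confusion.
        c = p ^ toy_encipher(register, key)[0]
        out.append(c)
        register = register[1:] + bytes([c])   # shift ciphertext in
    return bytes(out)

def cfb_decipher(ciphertext, key, iv):
    register, out = iv, bytearray()
    for c in ciphertext:
        out.append(c ^ toy_encipher(register, key)[0])
        register = register[1:] + bytes([c])
    return bytes(out)
```

<P>Because ciphering is byte-by-byte, the message need not be a whole
number of blocks.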
<A NAME = "Chain"></A>
<P><DT><B>Chain</B>
<DD>An operation repeated in a sequence, such that each result
depends upon the previous result, or an
<A HREF = "#IV">initial value</A>.
One example is the
<A HREF = "#CBC">CBC</A> operating mode.
<A NAME = "Chaos"></A>
<P><DT><B>Chaos</B>
<DD>The unexpected ability to find numerical relationships in
physical processes formerly considered
<A HREF = "#Random">random</A>. Typically these take the form
of iterative applications of fairly simple computations.
In a chaotic system, even tiny changes in
<A HREF = "#State">state</A> eventually lead to major changes
in state; this is called "sensitive dependence on initial
conditions." It has been argued that every good computational
<A HREF = "#RandomNumberGenerator">random number generator</A>
is "chaotic" in this sense.
<P>In physics, the "state" of an
<A HREF = "#Analog">analog</A> physical system cannot be
fully measured, which always leaves some remaining uncertainty to
be magnified on subsequent steps. And, in many cases, a physical
system may be slightly affected by thermal noise and thus continue
to accumulate new information into its "state."
<P>In a
<A HREF = "#Computer">computer</A>, the state of the
<A HREF = "#Digital">digital</A>
<A HREF = "#System">system</A> is explicit and
complete, and there is no uncertainty. No noise is accumulated.
All operations are completely
<A HREF = "#Deterministic">deterministic</A>. This means that, in a
computer, even a "chaotic" computation is completely predictable
and repeatable.
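<P>The classic example is the logistic map; this sketch simply shows
that in a computer the same starting state always produces exactly
the same "chaotic" result:

```python
def logistic_orbit(x, steps, r=3.9):
    # Iterate the logistic map x -> r*x*(1-x), a standard chaotic
    # computation.  Tiny differences in x are rapidly magnified, but
    # an identical x always gives an identical, repeatable result.
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x
```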
<A NAME = "ChiSquare"></A>
<P><DT><B>Chi-Square</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, a
<A HREF = "#GoodnessOfFit">goodness of fit</A> test used for
comparing two
<A HREF = "#Distribution">distributions</A>. Mainly used on
<A HREF = "#Nominal">nominal</A> and
<A HREF = "#Ordinal">ordinal</A> measurements. Also see:
<A HREF = "#KolmogorovSmirnov">Kolmogorov-Smirnov</A>.
<P>In the usual case, many independent samples are counted by
category or separated into value-range "bins." The reference
distribution gives us the number of values to expect in
each bin. Then we compute a X<SUP>2</SUP> test
<A HREF = "#Statistic">statistic</A> related to the difference
between the distributions:
<PRE>
X<SUP>2</SUP> = SUM( SQR(Observed[i] - Expected[i]) / Expected[i] )
</PRE>
<P>("SQR" is the squaring function, and we require that each
expectation not be zero.) Then we use a
tabulation of chi-square statistic values to look up the probability
that a particular X<SUP>2</SUP> value or lower (in the
<A HREF = "#cdf">c.d.f.</A>) would occur by random sampling if both
distributions were the same. The statistic also depends upon the
"<A HREF = "#DegreesOfFreedom">degrees of freedom</A>," which is
almost always one less than the final number of bins. See the
<A HREF = "http://www.io.com/~ritter/JAVASCRP/NORMCHIK.HTM#ChiSquare">chi-square</A>
section of the
<A HREF = "http://www.io.com/~ritter/#JavaScript">Ciphers By Ritter /
JavaScript</A> computation pages.
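<P>The statistic itself is a one-line computation (the helper below
is illustrative, not a library routine). Here 512 samples over 8
bins are compared against a flat reference of 64 expected counts per
bin, with 8 - 1 = 7 degrees of freedom:

```python
def chi_square(observed, expected):
    # X^2 = SUM( SQR(Observed[i] - Expected[i]) / Expected[i] ),
    # exactly as in the formula above; expectations must be nonzero.
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [57, 71, 60, 66, 69, 58, 63, 68]   # 512 counts in 8 bins
expected = [64] * 8
statistic = chi_square(observed, expected)     # 3.0625
```

<P>A modest total like this is unremarkable; the c.d.f. lookup then
converts the statistic into a probability.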
<P>The <A HREF = "#cdf">c.d.f.</A> percentage for a particular
chi-square value is the area of the statistic distribution to the
left of the statistic value; this is the probability of obtaining
that statistic value <I>or less</I> by random selection when testing
two distributions which are exactly the same. Repeated trials which
randomly sample two identical distributions should produce about the
same number of X<SUP>2</SUP> values in each quarter of the distribution
(0% to 25%, 25% to 50%, 50% to 75%, and 75% to 100%). So if we
repeatedly find only very high percentage values, we can assume that
we are probing different distributions. And even a single very high
percentage value would be a matter of some interest.
<P>Any statistic probability can be expressed either as the
proportion of the area to the <I>left</I> of the statistic value
(this is the "cumulative distribution function" or c.d.f.), or as
the area to the <I>right</I> of the value (this is the "upper tail").
Using the upper tail representation for the X<SUP>2</SUP> distribution
can make sense because the usual chi-squared test is a "one tail" test
where the decision is always made on the upper tail. But the
"upper tail" has an opposite "sense" to the c.d.f., where higher
statistic values always produce higher percentage values.
Personally, I find it helpful to describe all statistics by their
c.d.f., thus avoiding the use of a wrong "polarity" when interpreting
any particular statistic. While it is easy enough to convert from
the c.d.f. to the complement or vice versa (just subtract from 1.0),
we can base our arguments on either form, since the statistical
implications are the same.
<P>It is often unnecessary to use a statistical test if we just want
to know whether a function is producing something like the expected
distribution: We can <I>look</I> at the binned values and
generally get a good idea about whether the distributions change in
similar ways at similar places. A good rule-of-thumb is to expect
chi-square totals similar to the number of bins, but distinctly
different distributions often produce huge totals far beyond the
values in any table, and computing an exact probability for such
cases is simply irrelevant. On the other hand, it can be very
useful to perform 20 to 40 independent experiments to look for a
reasonable statistic distribution, rather than simply making a
"yes / no" decision on the basis of what might turn out to be a
rather unusual result.
<P>Since we are accumulating <I>discrete</I> bin-counts, any
fractional expectation will always differ from any actual count.
For example, suppose we expect an
<A HREF = "#UniformDistribution">even distribution</A>, but have many
bins and so only accumulate enough samples to observe about 1 count
for every 2 bins. In this situation, the absolute best sample
we could hope to see would be something like (0,1,0,1,0,1,...),
which would represent an even, balanced distribution over the range.
But even in this best possible case we would still be off by half
a count in each and every bin, so the chi-square result would not
properly characterize this best possible sequence. Accordingly, we
need to accumulate enough samples so that the quantization which
occurs in binning does not appreciably affect the accuracy of the
result. Normally I try to expect at least 10 counts in each bin.
<P>But when we have a reference distribution that trails off toward
zero, <I>inevitably</I> there will be some bins with few counts.
Taking more samples will just expand the range of bins, some of which
will be lightly filled in any case. We can avoid quantization error
by summing both the observations and expectations from multiple bins,
until we get a reasonable expectation value (again, I like to see 10
counts or more). In this way, the "tails" of the distribution can
be more properly (and legitimately) characterized.
<A NAME = "Cipher"></A>
<P><DT><B>Cipher</B>
<DD>In general, a
<A HREF = "#Key">key</A>-selected secret transformation between
<A HREF = "#Plaintext">plaintext</A> and
<A HREF = "#Ciphertext">ciphertext</A>.
Specifically, a secrecy
<A HREF = "#Mechanism">mechanism</A> or process which operates on
individual characters or
<A HREF = "#Bit">bits</A> independent of semantic content.
As opposed to a secret
<A HREF = "#Code">code</A>, which generally operates on words,
phrases or sentences, each of which may carry some amount of
complete meaning. Also see:
<A HREF = "#Cryptography">cryptography</A>,
<A HREF = "#BlockCipher">block cipher</A>,
<A HREF = "#StreamCipher">stream cipher</A>,
<A HREF = "#CipherTaxonomy"><NOBR>a cipher taxonomy</NOBR></A>, and
<A HREF = "#Substitution">substitution</A>.
<P>A good cipher can transform secret information into a multitude
of different intermediate forms, each of which represents the original
information. <I>Any</I> of these intermediate forms or ciphertexts
can be produced by ciphering the information under a particular key
value. The intent is that the original information only be exposed
by <I>one</I> of the many possible keyed interpretations of that
ciphertext. Yet the correct interpretation is available merely by
deciphering under the appropriate key.
<P>A cipher appears to reduce the protection of secret information
to enciphering under some key, and then keeping that key secret.
This is a great reduction of effort and potential exposure, and is
much like keeping your valuables in your house, and then locking
the door when you leave. But there are also similar limitations
and potential problems.
<P>With a good cipher, the resulting ciphertext can be stored or
transmitted otherwise exposed without also exposing the secret
information hidden inside. This means that ciphertext can be stored
in, or transmitted through, systems which have no secrecy protection.
For transmitted information, this also means that the cipher itself
must be distributed in multiple places, so in general the cipher
cannot be assumed to be secret. With a good cipher, only the
deciphering key need be kept secret.
<A NAME = "CipherTaxonomy"></A>
<P><DT><B>A Cipher Taxonomy</B>
<DD>For the analysis of cipher operation it is useful to collect
ciphers into groups based on their functioning (or <I>intended</I>
functioning). The goal is to group ciphers which are <I>essentially
similar,</I> so that as we gain an understanding of one cipher, we
can apply that understanding to others in the same group. We thus
classify <I>not</I> by the
<A HREF = "#Component">components</A> which make up the cipher, but
instead on the "black-box" operation of the cipher itself.
<P>We seek to hide distinctions of size, because <I>operation</I>
is independent of size, and because size effects are usually
straightforward. We thus classify serious
<A HREF = "#BlockCipher">block ciphers</A> as
<A HREF = "#Key">keyed</A>
<A HREF = "#SimpleSubstitution">simple substitution</A>, just like
newspaper amusement ciphers, despite their obvious differences in
strength and construction. This allows us to compare the results
from an ideal tiny cipher to those from a large cipher construction;
the grouping thus can provide <I>benchmark</I> characteristics for
measuring large cipher constructions.
<P>We <I>could</I> of course treat each cipher as an entity unto
itself, or relate ciphers by their dates of discovery, the tree of
developments which produced them, or by known strength. But each of
these criteria is more or less limited to telling us "this cipher is
what it is." We already know that. What we <I>want</I> to know is
what other ciphers function in a similar way, and then whatever is
known about <I>those</I> ciphers. In this way, every cipher need
not be an island unto itself, but instead can be judged and compared
in a related community of similar techniques.
<P>Our primary distinction is between ciphers which handle all the
data at once
(<A HREF = "#BlockCipher">block ciphers</A>), and those which handle
some, then some more, then some more
(<A HREF = "#StreamCipher">stream ciphers</A>). We thus see the
usual repeated use of a block cipher as a stream <I>meta-cipher</I>
which has the block cipher as a component.
It is also possible for a stream cipher to be re-keyed or re-originate
frequently, and so appear to operate on "blocks." Such a cipher,
however, would not have the
<A HREF = "#OverallDiffusion">overall diffusion</A> we normally
associate with a block cipher, and so might usefully be regarded as
a stream meta-cipher with a stream cipher component.
<P>The goal is not to give each cipher a label, but instead to seek
insight. Each cipher in a particular general class carries with it
the consequences of that class. And because these groupings ignore
size, we are free to generalize from the small to the large and so
predict effects which may be unnoticed in full-size ciphers. <P>
<OL TYPE = A>
<BIG><B>
<LI><A HREF = "#BlockCipher">BLOCK CIPHER</A></B></BIG>
<BR>A block cipher <I>requires</I> the accumulation of some amount
of data or multiple data elements for ciphering to complete.
(Sometimes stream ciphers accumulate data for convenience, as
in cylinder ciphers, which nevertheless logically cipher each
character independently.)
<P>(Note that this definition is somewhat
broader than the now common understanding of a huge, and thus
<I>emulated,</I> Simple Substitution. But there are ciphers
which require blocked plaintext and which do <I>not</I> emulate
Simple Substitution, and calling these something other than
"block" ciphers negates the advantage of a taxonomy.)
<P><OL TYPE = 1>
<B><LI><A HREF = "#Substitution">SUBSTITUTION</A> CIPHER</B>
<UL>
<LI>A "codebook" or "simple substitution."
<LI>Each code value becomes a distinguishable element.
Thus, substitution generally converts a collection of
independent elements to a single related unit.
<LI>Keying constitutes a <A HREF = "#Permutation">permutation</A>
or re-arrangement of the fixed set of possible
<A HREF = "#Code">code</A> values.
<LI><A HREF = "#Avalanche">Avalanche</A> or data
<A HREF = "#Diffusion">diffusion</A> is a natural
consequence of an arbitrary selection among all possible
code values.
<LI>The usual complete binary substitution distributes
bit-changes between code values binomially, and this
effect can be sampled and examined statistically.
<LI>Avalanche is two-way diffusion in the sense that "later"
plaintext can change "earlier" ciphertext.
<LI>A conventional block cipher is built from small components
with a design intended to <I>simulate</I> a substitution
table of a size vastly larger than anything which could be
practically realized.
</UL>
<P><OL TYPE = a>
<B><LI><A HREF = "#Transposition">Transposition</A> Cipher</B>
<UL>
<LI>Clearly, it is necessary for all message elements
which will be transposed to be collected before
operations begin; this is the block cipher signature.
<LI>Any possible transposition is necessarily a subset
of an arbitrary substitution; thus, transposition can
be seen as a particular keying subset of substitution.
<LI>Notice, however, that the usual avalanche signature
of substitution is not present, and of course the
actual data values are not changed at all by
transposition, just moved about.
<LI>Also notice that we are close to using the idea of
permutation in two very different ways: first as a
particular n-bit to n-bit substitution, and second as
a particular re-arrangement of characters in the block.
These have wildly different ciphering effects.
</UL>
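<P>Transposition can be sketched the same way; the key point is that the
whole block is collected first, and the values themselves never change
(the "order" here is a hypothetical keyed arrangement, for illustration
only):

```python
def transpose(block, order):
    """Re-arrange positions within a collected block; values unchanged."""
    return [block[i] for i in order]

order = [2, 0, 3, 1]               # a hypothetical keyed arrangement
block = list(b"CODE")              # all elements collected first
ciphertext = transpose(block, order)

# invert: send each ciphered element back to its original position
inverse = [0] * len(order)
for new_pos, old_pos in enumerate(order):
    inverse[old_pos] = new_pos
plaintext = transpose(ciphertext, inverse)
```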
</OL>
</OL>
<P><BIG><B>
<LI><A HREF = "#StreamCipher">STREAM CIPHER</A></B></BIG>
<UL>
<LI>A stream cipher does <I>not</I> need to accumulate some amount
of data or multiple data elements for ciphering to complete.
(Since we define only two main "types" of cipher, a stream cipher
is the opposite of a block cipher and vice versa. It is
extremely important that the definitions for block and stream
ciphering enclose the universe of all possible ciphers.)
<LI>A stream cipher has the ability to transform individual
elements one-by-one. The actual transformation usually is
a block transformation, and may be repeated with the same
or different keying.
<LI>In a stream cipher, data diffusion may or may not occur, but
if it does, it is necessarily one-way (from earlier to
later elements).
<LI>Since elements are ciphered one-by-one, changing part of
the plaintext can affect that part and possibly <I>later</I>
parts of the ciphertext; this is a stream cipher signature.
<LI>The simple re-use of a block transformation to cover more
data than a single block is a stream operation.
</UL>
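<P>The one-way diffusion signature can be demonstrated with a toy
stream cipher (the sequence generator below is a deliberately weak
stand-in, not a recommendation): with a simple additive combiner,
changing one plaintext element changes only the matching ciphertext
element, and never any <I>earlier</I> one.

```python
def stream_encipher(plain, seed):
    """Cipher elements one-by-one: a running confusion sequence
    with a simple additive (XOR) combiner.  A toy sketch only."""
    x = seed
    out = []
    for p in plain:
        x = (1103515245 * x + 12345) & 0xFF   # toy sequence generator
        out.append(p ^ x)
    return out

a = stream_encipher(list(b"attack at dawn"), seed=7)
b = stream_encipher(list(b"attack at dusk"), seed=7)
changed = [i for i in range(len(a)) if a[i] != b[i]]
```

Here only the positions where the plaintexts differ produce different
ciphertext; since XOR is its own inverse, enciphering the ciphertext
again under the same seed recovers the plaintext.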
<P><OL TYPE = 1>
<B><LI><A HREF = "#ConfusionSequence">CONFUSION SEQUENCE</A></B>
<UL>
<LI>With a truly random sequence, used once, we have a
<A HREF = "#OneTimePad">one time pad</A>.
<LI>With a pseudorandom confusion sequence and a simple
additive combiner, we have a Vernam cipher.
<LI>A simple additive transformation becomes weak upon the
second character ciphered, or immediately, under
<A HREF = "#KnownPlaintextAttack">known plaintext</A>,
making strength dependent on the confusion sequence.
<LI>More complex transformations imply the need for
correspondingly less strong confusion sequences.
</UL>
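<P>The known-plaintext weakness of a simple additive combiner is easy
to see directly (the keystream values below are arbitrary examples):
each known plaintext character immediately exposes the confusion
sequence at that position.

```python
plain     = list(b"HELLO")
keystream = [0x1F, 0xA2, 0x55, 0x07, 0xC3]  # hypothetical confusion sequence
cipher    = [p ^ k for p, k in zip(plain, keystream)]

# known plaintext exposes the sequence immediately:
recovered = [p ^ c for p, c in zip(plain, cipher)]
```

This is why, with a simple additive combiner, all strength must reside
in the confusion sequence itself.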
<P><OL TYPE = a>
<B><LI>Autokey</B>
<UL>
<LI>Normally the use of ciphertext, but also perhaps
plaintext, as the cipher key.
<LI>Can create a random-like confusion stream which will
re-synchronize after ciphertext data loss.
<LI>Under known-plaintext, the common "ciphertext feedback"
version exposes both the confusion sequence and the
input which creates that sequence. This is a lot of
pressure on a single transformation.
</UL>
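<P>A ciphertext-feedback autokey can be sketched with a toy additive
transformation (illustrative only); the re-synchronization property
falls out directly, since deciphering any element needs only the
previous ciphertext element.

```python
def autokey_encipher(plain, iv):
    """Ciphertext feedback: the previous ciphertext keys each step."""
    prev, out = iv, []
    for p in plain:
        c = (p + prev) % 256       # toy additive transformation
        out.append(c)
        prev = c
    return out

def autokey_decipher(cipher, iv):
    prev, out = iv, []
    for c in cipher:
        out.append((c - prev) % 256)
        prev = c
    return out

ct = autokey_encipher(list(b"RESYNC"), iv=0x3C)
pt = autokey_decipher(ct, iv=0x3C)
# after ciphertext loss, deciphering re-synchronizes at any later point:
tail = autokey_decipher(ct[2:], iv=ct[1])
```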
</OL>
<P>
<B><LI><A HREF = "#MonoalphabeticSubstitution">MONOALPHABETIC</A></B>
(e.g., <A HREF = "#DES">DES</A> <A HREF = "#CBC">CBC</A>)
<UL>
<LI>The repeated use of a single fixed substitution.
<LI>A conventional block cipher <I>simulates</I> a large
substitution.
<LI>A substitution becomes weak when its code values are
re-used.
<LI>Code value re-use can be minimized by randomizing the
plaintext block (e.g., CBC). This distributes the
plaintext evenly across the possible block values, but
at some point the transformation itself must change or
be exposed.
<LI>Another alternative is to use a very large block so that
code value re-use is made exceedingly unlikely. A large
block also has room for a dynamic keying field which
would make code value re-use even more unlikely.
</UL>
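<P>The visible cost of re-using a single fixed substitution, and the
effect of randomizing the plaintext block, can be sketched with a toy
stand-in for a block cipher (an invertible keyed byte transformation;
<B>not</B> a real cipher):

```python
def toy_block(b, key):
    """Stand-in for a block cipher: a keyed 8-bit substitution.
    NOT a real cipher; for illustration only."""
    return (b * 17 + key) % 256

key = 0x5A
blocks = [0x41, 0x41, 0x41]        # repeated plaintext blocks

# the fixed substitution re-used directly: repeats are exposed
ecb = [toy_block(b, key) for b in blocks]

# CBC-style randomization: each block is mixed with the previous
# ciphertext (or an IV) before the substitution
iv = 0x77
prev, cbc = iv, []
for b in blocks:
    c = toy_block(b ^ prev, key)
    cbc.append(c)
    prev = c
```

The direct re-use shows identical ciphertext for identical plaintext
blocks; the randomized version hides the repeats, though at some point
the transformation itself must still change or be exposed.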
<P>
<B><LI><A HREF = "#PolyalphabeticSubstitution">POLYALPHABETIC</A></B>
<UL>
<LI>The use of multiple fixed substitutions.
<LI>By itself, the use of multiple alphabets in a regular
sequence is inherently not much stronger than just a
single alphabet.
<LI>It is of course possible to select an alphabet or
transformation at pseudo-random, for example by
re-keying DES after every block ciphered. This brings
back sequence strength as an issue, and opens up the
sequence generator starting
<A HREF = "#State">state</A> as an
<A HREF = "#IV">IV</A>.
<LI>A related possibility is the use of a
<A HREF = "#LatinSquareCombiner">Latin square combiner</A>
which effectively selects among a balanced set of
different fixed substitution alphabets.
</UL>
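<P>A small Latin square makes the "balanced set of alphabets" idea
concrete: every symbol appears exactly once in each row and each
column, so each key symbol selects a different, balanced, fixed
substitution alphabet for the data symbol.

```python
# a 4x4 Latin square used as a combiner (key selects the row/alphabet)
square = [
    [0, 1, 2, 3],
    [1, 2, 3, 0],
    [2, 3, 0, 1],
    [3, 0, 1, 2],
]

def combine(key_sym, data_sym):
    return square[key_sym][data_sym]

rows_balanced = all(sorted(row) == [0, 1, 2, 3] for row in square)
cols_balanced = all(sorted(col) == [0, 1, 2, 3] for col in zip(*square))
```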
<P><OL TYPE = a>
<B><LI>Cylinder</B>
<UL>
<LI>A cipher which has or simulates the use of a number
of different alphabet disks on a common rod.
<LI>Primary keying is the arrangement of the alphabet around
each disk, and the selection and arrangement of disks.
<LI>By entering the plaintext on one row, any of n-1 other
rows can be sent as ciphertext; this selection is an
<A HREF = "#IV">IV</A>.
<LI>If the plaintext data are redundant, it is possible to
avoid sending the IV by selecting the one of n-1
possible decipherings which shows redundancy. But this
is not generally possible when ciphering arbitrary
binary data.
<LI>If an IV is selected first, each character ciphering in
that "chunk" is independent of every other ciphering.

There is no data <A HREF = "#Diffusion">diffusion</A>.
<LI>In general, each disk is used at fixed periodic
intervals through the text, which is weak.
<LI>The ciphertext selection is
<A HREF = "#Homophonic">homophonic</A>, in the sense
that different ciphertext rows each represent exactly
the same plaintext.
<LI>Cylinder operation is <B>not</B>
<A HREF = "#Polyphonic">polyphonic</A> in the usual
sense: While a single ciphertext <I>can</I> imply any
other row is plaintext, generally only one row has a
reasonable plaintext meaning.
</UL>
</OL>
<P>
<B><LI><A HREF = "#DynamicSubstitutionCombiner">DYNAMIC</A></B>
<UL>
<LI>The use of one (monoalphabetic) or multiple
(polyalphabetic) substitutions which <I>change</I> during
ciphering.
</UL>
<P>
<B><LI>ITERATIVE</B>
<UL>
<LI>The iterative re-use of a stream cipher with a new random
<A HREF = "#IV">IV</A> on each iteration so as to eventually
achieve the effect of a
<A HREF = "#MessageKey">message key</A>.
<LI>Each iteration seemingly must expand the ciphertext by
the size of the IV, although this is probably about the
same expansion we would have with a message key.
<LI>Unfortunately, each iteration will take some time.
</UL>
</OL>
</OL>
<A NAME = "Ciphering"></A>
<P><DT><B>Ciphering</B>
<DD>The use of a
<A HREF = "#Cipher">cipher</A>.
The general term which includes both
<A HREF = "#Encipher">enciphering</A> and
<A HREF = "#Decipher">deciphering</A>.
<A NAME = "Ciphertext"></A>
<P><DT><B>Ciphertext</B>
<DD>The result of
<A HREF = "#Encipher">enciphering</A>. Ciphertext will contain the
same information as the original
<A HREF = "#Plaintext">plaintext</A>, but hide the original
information, typically under the control of a
<A HREF = "#Key">key</A>. Without the
key it should be impractical to recover the original information
from the ciphertext.
<A NAME = "CiphertextExpansion"></A>
<P><DT><B>Ciphertext Expansion</B>
<DD>When the
<A HREF = "#Ciphertext">ciphertext</A> is larger than the original
<A HREF = "#Plaintext">plaintext</A>.
<P>Ciphertext expansion is the general situation:
<A HREF = "#StreamCipher">Stream ciphers</A> need a
<A HREF = "#MessageKey">message key</A>, and
<A HREF = "#BlockCipher">block ciphers</A> with a small block
need some form of plaintext randomization, which generally
needs an
<A HREF = "#IV">IV</A> to protect the first block. Only block
ciphers with a large size block generally can avoid ciphertext
expansion, and then only if each block can be expected to hold
sufficient uniqueness or "entropy" to prevent a
<A HREF = "#CodebookAttack">codebook attack</A>.
<P>It is certainly true that in most situations of new construction
a few extra bytes are not going to be a problem. However, in some
situations, and especially when a cipher is to be installed into
an existing system, the ability to encipher data <I>without</I>
requiring additional storage can be a big advantage. Ciphering
data without expansion supports the ciphering of data structures
which have been defined and fixed by the rest of the system,
provided only that one can place the cipher at the interface
"between" two parts of the system. This is also especially
efficient, as it avoids the process of acquiring a different,
larger, amount of store for each ciphering. Such an installation
also can apply to the entire system, and not require the
re-engineering of all applications to support cryptography in
each one.
<A NAME = "Ciphony"></A>
<P><DT><B>Ciphony</B>
<DD>Audio or voice
<A HREF = "#Encryption">encryption</A>. A contraction of "ciphered
telephony."
<A NAME = "Circuit"></A>
<P><DT><B>Circuit</B>
<DD>The "circular" flow of electrons from a power source, through
<A HREF = "#Conductor">conductors</A> and
<A HREF = "#Component">components</A> and back to the power source.
Or the arrangement of components which allows such flow and
performs some function.
<A NAME = "Clock"></A>
<P><DT><B>Clock</B>
<DD>A repetitive or cyclic timing signal to coordinate
<A HREF = "#State">state</A> changes in a
<A HREF = "#Digital">digital</A> system. A clock can coordinate
the movement of data and results through various stages of
processing. Although a clock signal is digital, the source of the
repetitive signal is almost always an
<A HREF = "#Analog">analog</A>
<A HREF = "#Circuit">circuit</A>.
<P>In an analog system we might produce a known delay by slowly
charging a
<A HREF = "#Capacitor">capacitor</A> and measuring the
<A HREF = "#Voltage">voltage</A>
across it continuously until the voltage reaches the desired level.
A big problem with this is that the
<A HREF = "#Circuit">circuit</A> becomes increasingly susceptible
to noise at the end of the interval.
<P>In a digital system we create a delay by simply counting
clock cycles. Since all external operations are digital, noise
effects are virtually eliminated, and we can easily create
accurate delays which are as long as the count in any counter
we can build.
<A NAME = "Closed"></A>
<P><DT><B>Closed</B>
<DD>An operation on a
<A HREF = "#Set">set</A> which produces only elements in that set.
<A NAME = "Code"></A>
<P><DT><B>Code</B>
<DD>Symbols or values which stand for symbols, values, sequences,
or even operations (as in
<A HREF = "#Computer">computer</A>
"<A HREF = "#Opcode">opcodes</A>"). As opposed to a
<A HREF = "#Cipher">cipher</A>, which operates only on individual
characters or
<A HREF = "#Bit">bits</A>, classically, codes also represent words,
phrases, and entire sentences.
One application was to decrease the cost of telegraph messages.
In modern usage, a code is often simply a correspondence between
information (such as character symbols) and values (such as the
<A HREF = "#ASCII">ASCII</A> code or
<A HREF = "#Base64">Base-64</A>), although computer opcodes do have
independent meanings and variable lengths.
<P>Coding is a very basic part of modern computation and generally
implies no
<A HREF = "#Secrecy">secrecy</A> or information hiding. Some codes
are "secret codes," however, and then the transformation between the
information and the coding is kept secret. Also see:
<A HREF = "#Cryptography">cryptography</A> and
<A HREF = "#Substitution">substitution</A>.
<A NAME = "Codebook"></A>
<P><DT><B>Codebook</B>
<DD>Literally, the listing or "book" of
<A HREF = "#Code">code</A>
transformations. More generally, any collection of such
transformations. Classically, letters, common words and useful
phrases were numbered in a codebook; messages transformed into
those numbers were "coded messages." Also see
<A HREF = "#Nomenclator">nomenclator</A>.
A "codebook style cipher" refers to a
<A HREF = "#BlockCipher">block cipher</A>.
<A NAME = "CodebookAttack"></A>
<P><DT><B>Codebook Attack</B>
<DD>A form of
<A HREF = "#Attack">attack</A> in which The
<A HREF = "#Opponent">Opponent</A> simply tries to build or collect a
<A HREF = "#Codebook">codebook</A> of all the possible transformations
between
<A HREF = "#Plaintext">plaintext</A> and
<A HREF = "#Ciphertext">ciphertext</A> under a single
<A HREF = "#Key">key</A>. This is the classic approach we
normally think of as "codebreaking."
<P>The usual ciphertext-only approach depends upon the plaintext
having strong statistical biases which make some values far more
probable than others, and also more probable in the context of
particular preceding known values. Such attacks can be defeated if
the plaintext data are randomized and thus evenly and independently
distributed among the possible values. (This may have been the
motivation for the use of a
<A HREF = "#Random">random</A>
<A HREF = "#ConfusionSequence">confusion sequence</A> in a
<A HREF = "#StreamCipher">stream cipher</A>.)
<P>When a codebook attack is possible on a
<A HREF = "#BlockCipher">block cipher</A>, the complexity of the
attack is controlled by the size of the block (that is, the number
of elements in the codebook) and not the
<A HREF = "#Strength">strength</A> of the cipher.
This means that a codebook attack would be equally effective
against either
<A HREF = "#DES">DES</A> or
<A HREF = "#TripleDES">Triple-DES</A>.
<P>One way a block cipher can avoid a codebook attack is by having
a large
<A HREF = "#Block">block</A> size which will contain an unsearchable
amount of plaintext "uniqueness" or
<A HREF = "#Entropy">entropy</A>. Another approach is to randomize the
plaintext block, often by using an
<A HREF = "#OperatingMode">operating mode</A> such as
<A HREF = "#CBC">CBC</A>.
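<P>A small sketch shows why block size, and not cipher strength,
controls the attack (the cipher below is a hypothetical 8-bit block
transformation; any cipher with the same block size would fare the
same):

```python
def toy_cipher(p, key):
    """A hypothetical 8-bit block cipher; its internal strength is
    irrelevant to the codebook attack below."""
    return ((p ^ key) * 17 + 23) % 256

key = 0xB7
# The Opponent collects every (ciphertext, plaintext) pair under one key:
codebook = {toy_cipher(p, key): p for p in range(256)}

ct = toy_cipher(0x42, key)
pt = codebook[ct]                  # deciphered with no key at all
```

With an 8-bit block there are only 256 codebook entries to collect,
no matter how strong the transformation inside.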
<A NAME = "Combination"></A>
<P><DT><B>Combination</B>
<DD>The mathematical term for any particular subset of symbols,
independent of order. (Also called the binomial coefficient.)
The number of combinations of <I>n</I> things, taken <I>k</I>
at a time, read "<I>n</I> choose <I>k</I>" is:
<PRE>
     n
    ( )  =  C(n,k)  =  n! / (k! (n-k)!)
     k
</PRE>
<P>See the
<A HREF = "http://www.io.com/~ritter/JAVASCRP/PERMCOMB.HTM#Combinations">combinations</A>
section of the
<A HREF = "http://www.io.com/~ritter/#JavaScript">Ciphers By Ritter /
JavaScript</A> computation pages. Also see
<A HREF = "#Permutation">permutation</A>.
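<P>The formula can be computed directly; for example, in Python:

```python
from math import factorial

def C(n, k):
    """n choose k: the number of k-element subsets of an n-element set."""
    return factorial(n) // (factorial(k) * factorial(n - k))
```

So C(4,2) = 6: the subsets {1,2}, {1,3}, {1,4}, {2,3}, {2,4}, {3,4};
and C(52,5) counts the 5-card hands from a 52-card deck.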
<A NAME = "Combinatoric"></A>
<P><DT><B>Combinatoric</B>
<DD>Combinatorics is a branch of mathematics, like analysis
or number theory. Combinatorics is often related to counting
the subsets of finite sets. One result is to help us to
understand the probability of a particular subset in the
universe of possible values.
<P>Consider a
<A HREF = "#BlockCipher">block cipher</A>:
For any given size block, there
is some fixed number of possible messages. Since every
enciphering must be reversible (deciphering must work), we
have a 1:1 mapping between
<A HREF = "#Plaintext">plaintext</A> and
<A HREF = "#Ciphertext">ciphertext</A> blocks.
The set of all plaintext values and the set of all ciphertext
values is the same set; particular values just have different
meanings in each set.
<P><A HREF = "#Key">Keying</A> gives us no more ciphertext values;
it only re-uses
the values which are available. Thus, keying a block cipher
consists of selecting a particular arrangement or
<A HREF = "#Permutation">permutation</A>
of the possible block values. Permutations are a combinatoric
topic. Using combinatorics we can talk about the number of
possible permutations or keys in a block cipher, or in cipher
components like substitution tables.
<P>Permutations can be thought of as the number of unique
arrangements of a given length on a particular set. Other
combinatoric concepts include
<A HREF = "#BinomialDistribution">binomials</A>
and
<A HREF = "#Combination">combinations</A>
(the number of unique given-length subsets of a given set).
<A NAME = "Combiner"></A>
<P><DT><B>Combiner</B>
<DD>In a cryptographic context, a combiner is a
<A HREF = "#Mechanism">mechanism</A> which
<A HREF = "#Mixing">mixes</A> two data sources into a single result.
A "combiner style cipher" refers to a
<A HREF = "#StreamCipher">stream cipher</A>.
<P><I>Reversible</I> combiners are used to
<A HREF = "#Encipher">encipher</A>
<A HREF = "#Plaintext">plaintext</A> into
<A HREF = "#Ciphertext">ciphertext</A> in a
<A HREF = "#StreamCipher">stream cipher</A>. The ciphertext is then
<A HREF = "#Decipher">deciphered</A> into plaintext using a related
inverse or
<A HREF = "#Extractor">extractor</A> mechanism.
<P><I>Irreversible</I> or non-invertible combiners are often used to
mix multiple
<A HREF = "#RNG">RNG's</A> into a single
<A HREF = "#ConfusionSequence">confusion sequence</A>, also for
use in stream cipher designs.
<P>Also see
<A HREF = "#BalancedCombiner">balanced combiner</A>,
<A HREF = "#AdditiveCombiner">additive combiner</A> and
<A HREF = "#Complete">complete</A>, and
<A HREF = "http://www.io.com/~ritter/RES/COMBCORR.HTM">The Story
of Combiner Correlation: A Literature Survey</A>, in the
<A HREF = "http://www.io.com/~ritter/#LiteratureSurveys">Literature
Surveys and Reviews</A> section of the
<A HREF = "http://www.io.com/~ritter/">Ciphers By Ritter</A> page.
<A NAME = "Commutative"></A>
<P><DT><B>Commutative</B>
<DD>A
<A HREF = "#Dyadic">dyadic</A> operation in which exchanging the
two argument values must produce the same result:
<NOBR>a + b = b + a.</NOBR>
<P>Also see:
<A HREF = "#Associative">associative</A> and
<A HREF = "#Distributive">distributive</A>.
<A NAME = "Complete"></A>
<P><DT><B>Complete</B>
<DD>A term used in
<A HREF = "#S-Box">S-box</A> analysis to describe a property of
the value arrangement in an invertible
<A HREF = "#Substitution">substitution</A> or, equivalently, a
<A HREF = "#BlockCipher">block cipher</A>.
If we have some input value, and then change one bit in that
value, we expect about half the output bits to change; this is
the result of
<A HREF = "#Diffusion">diffusion</A>; when partial diffusion is
repeated we develop
<A HREF = "#Avalanche">avalanche</A>; and the ultimate result is
<A HREF = "#StrictAvalancheCriterion">strict avalanche</A>.
<I>Completeness</I> tightens this concept and requires that changing
a particular input bit produce a change in a particular output bit,
at some point in the transformation (that is, for at least one input
value). Completeness requires that this relationship occur at least
once for <I>every</I> combination of input bit and output bit.
It is tempting to generalize the definition to apply to multi-bit
element values, where this makes more sense.
<P>Completeness does <I>not</I> require that an input bit change
an output bit for <I>every</I> input value (which would not make
sense anyway, since <I>every</I> output bit must be changed at
<I>some</I> point, and if they all had to change at <I>every</I>
point, we would have <I>all</I> the output bits changing, instead
of the desired half). The inverse of a complete function is not
necessarily also complete.
<P>As originally defined in Kam and Davida:
<BLOCKQUOTE>
"For every possible key value, every output bit
<I>c<SUB>i</SUB></I> of the SP network depends upon all input
bits <I>p<SUB>1</SUB>,...,p<SUB>n</SUB></I> and not just a
proper subset of the input bits." [p.748]
</BLOCKQUOTE>
Kam, J. and G. Davida. 1979.
Structured Design of Substitution-Permutation Encryption Networks.
<I>IEEE Transactions on Computers.</I>
C-28(10): 747-753.
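<P>The definition can be checked mechanically for a small table; the
sketch below tests every (input bit, output bit) pair across all input
values. Note that the identity is <I>not</I> complete, while most
random 4-bit permutations are:

```python
import random

def complete(table, bits):
    """Completeness check: for every input bit i and output bit j,
    flipping bit i must change bit j for at least one input value."""
    seen = [[False] * bits for _ in range(bits)]
    for x in range(1 << bits):
        for i in range(bits):
            changed = table[x] ^ table[x ^ (1 << i)]
            for j in range(bits):
                if (changed >> j) & 1:
                    seen[i][j] = True
    return all(all(row) for row in seen)

# the identity is not complete: bit i only ever affects bit i
identity = list(range(16))

# most random 4-bit permutations are complete; search for one
rng = random.Random(1)
table = list(range(16))
while not complete(table, 4):
    rng.shuffle(table)
```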
<A NAME = "Component"></A>
<P><DT><B>Component</B>
<DD>A part of a larger construction; a building-block in an overall
design or
<A HREF = "#System">system</A>. Modern
<A HREF = "#Digital">digital</A> design is based on the use of a few
general classes of pre-defined, fully-specified parts. Since even
digital logic can use or even require
<A HREF = "#Analog">analog</A> values internally, by enclosing these
values the logic component can hide complexity and present the
appearance of a fully digital device.
<P>The most successful components are extremely general and can be
used in many different ways. Even as a brick is independent of the
infinite variety of brick buildings, a
<A HREF = "#FlipFlop">flip-flop</A> is independent of the infinite
variety of logic machines which use flip-flops.
<P>The source of the ability to design and build a wide variety of
different electronic logic machines is the ability to interconnect
and use a few very basic but very general parts.
<P><A HREF = "#Electronic">Electronic</A>
components include
<UL>
<LI>passive components like
<A HREF = "#Resistor">resistors</A>,
<A HREF = "#Capacitor">capacitors</A>, and
<A HREF = "#Inductor">inductors</A>;
<LI>active components like
<A HREF = "#Transistor">transistors</A> and even
<A HREF = "#Relay">relays</A>, and
<LI>whole varieties of active electronic logic devices,
including
<A HREF = "#FlipFlop">flip-flops</A>,
<A HREF = "#ShiftRegister">shift registers</A>, and
<A HREF = "#State">state</A> storage, or memory.
</UL>
<P>Cryptographic system components include:
<UL>
<LI>Nonlinear transformations, such as
<A HREF = "#S-Box">S-boxes</A> /
<A HREF = "#SubstitutionTable">substitution tables</A>,
<LI><A HREF = "#Key">key</A>
<A HREF = "#Hash">hashing</A>, such as
<A HREF = "#CRC">CRC</A>,
<LI><A HREF = "#RandomNumberGenerator">random number generators</A>, such as
<A HREF = "#AdditiveRNG">additive RNG's</A>,
<LI>sequence isolators such as
<A HREF = "#Jitterizer">jitterizers</A>,
<LI><A HREF = "#Combiner">combiners</A>, such as
<A HREF = "#DynamicSubstitutionCombiner">Dynamic Substitution</A>,
<A HREF = "#LatinSquareCombiner">Latin squares</A>, and
<A HREF = "#ExclusiveOR">exclusive-OR</A>,
<LI><A HREF = "#Mixing">mixers</A>, such as
<A HREF = "#BalancedBlockMixer">Balanced Block Mixers</A>, or
<A HREF = "#OrthogonalLatinSquares">orthogonal Latin squares</A>.
</UL>
<A NAME = "Computer"></A>
<P><DT><B>Computer</B>
<DD>Originally the job title for a person who performed a laborious
sequence of arithmetic computations. Now a machine for performing
such calculations.
<P>A logic machine with:
<P><OL>
<LI>Some limited set of fundamental computations. Typical operations
include simple arithmetic and
<A HREF = "#BooleanLogic">Boolean logic</A>. Each operation is
selected by a particular operation code value or
"<A HREF = "#Opcode">opcode</A>." This is a
<A HREF = "#Hardware">hardware</A> interpretation of the opcode.
<P><LI>The ability to follow a list of instructions or commands,
performing each in sequence. Thus capable of simulating a wide
variety of far more complex "instructions."
<P><LI>The ability to execute or perform at least some instructions
conditionally, based on parameter values or intermediate results.
<P><LI>The ability to store values into a numbered "address space"
which is far larger than the instruction set, and later to recover
those values when desired.
</OL>
<P>Also see:
<A HREF = "#SourceCode">source code</A>,
<A HREF = "#ObjectCode">object code</A> and
<A HREF = "#Software">software</A>.
<A NAME = "Conductor"></A>
<P><DT><B>Conductor</B>
<DD>A material in which electron flow occurs easily. Typically a
metal; usually copper, sometimes silver, brass or even aluminum.
A
<A HREF = "#Wire">wire</A>. As opposed to an
<A HREF = "#Insulator">insulator</A>.
<A NAME = "Confusion"></A>
<P><DT><B>Confusion</B>
<DD>Those parts of a
<A HREF = "#Cipher">cipher</A>
<A HREF = "#Mechanism">mechanism</A> which change the
correspondence between input values and output values. In
contrast to
<A HREF = "#Diffusion">diffusion</A>.
<A NAME = "ConfusionSequence"></A>
<P><DT><B>Confusion Sequence</B>
<DD>The sequence combined with data in a
<A HREF = "#StreamCipher">stream cipher</A>. Normally produced
by a
<A HREF = "#RandomNumberGenerator">random number generator</A>,
it is also called a "running key."
<A NAME = "Contextual"></A>
<P><DT><B>Contextual</B>
<DD>In the study of
<A HREF = "#Logic">logic</A>, an observed fact dependent upon other
facts <I>not</I> being observed. Or a statement which is
conditionally true, provided other unmentioned conditions have the
appropriate
<A HREF = "#State">state</A>. As opposed to
<A HREF = "#Absolute">absolute</A>.
<A NAME = "ConventionalCipher"></A>
<P><DT><B>Conventional Cipher</B>
<DD>A
<A HREF = "#SecretKeyCipher">secret key cipher</A>.
<A NAME = "Congruence"></A>
<P><DT><B>Congruence</B>
<DD>Casually speaking, the remainder after a division of
<A HREF = "#Integer">integers</A>.
<P>In number theory we say that integer a (exactly) <I>divides</I>
integer b (denoted <NOBR>a | b</NOBR>) if and only if there is an
integer k such that <NOBR>ak = b.</NOBR>
<P>In number theory we say that integer a is <I>congruent</I> to
integer b
<A HREF = "#Modulo"><I>modulo</I></A> m, denoted
<NOBR>a = b (mod m),</NOBR>
if and only if <NOBR>m | (a - b).</NOBR> Here m is the divisor
or <I>modulus.</I>
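<P>Both definitions translate directly into code:

```python
def divides(a, b):
    """a | b: there exists an integer k with a*k == b."""
    return b % a == 0

def congruent(a, b, m):
    """a = b (mod m) if and only if m | (a - b)."""
    return divides(m, a - b)
```

For example, 17 = 5 (mod 12) because 12 | (17 - 5).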
<A NAME = "Convolution"></A>
<P><DT><B>Convolution</B>
<DD><A HREF = "#Polynomial">Polynomial</A> multiplication.
A multiplication of each term against each other term, with no
"carries" from term to term. Also see
<A HREF = "#Correlation">correlation</A>.
<P>Used in the analysis of signal processing to develop the response
of a processing system to a complicated real-valued input signal.
The input signal is first separated into some number of discrete
impulses. Then the system response to an impulse -- the output level
at each unit time delay after the impulse -- is determined. Finally,
the expected response is computed as the sum of the contributions
from each input impulse, multiplied by the magnitude of each impulse.
This is an approximation to the convolution integral with an infinite
number of infinitesimal delays. Although originally accomplished
graphically, the process is just polynomial multiplication.
<P>It is apparently possible to compute the convolution of two
sequences by taking the
<A HREF = "#FFT">FFT</A> of each, multiplying these results
term-by-term, then taking the inverse FFT. While there is an
analogous relationship in the
<A HREF = "#FWT">FWT</A>, in this case the "delays" between the
sequences represent
<A HREF = "#Mod2">mod 2</A> distance differences, which may or may
not be useful.
<A NAME = "Correlation"></A>
<P><DT><B>Correlation</B>
<DD>In general, the probability that two sequences of symbols
will, in any position, have the same symbol. We expect two
<A HREF = "#Random">random</A>
binary
sequences to have the same symbols about half the time.
<P>One way to evaluate the correlation of two real-valued sequences
is to multiply them together term-by-term and sum all results.
If we do this for all possible "delays" between the two sequences,
we get a "vector" or 1-dimensional array of correlations which is a
<A HREF = "#Convolution">convolution</A>. Then the maximum value
represents the delay with the best correlation.
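<P>The multiply-and-sum evaluation can be sketched directly (toy
sequences chosen so the match is easy to see):

```python
def correlate(x, y):
    """Multiply term-by-term and sum, at every delay of y across x;
    the result is one correlation value per delay."""
    return [sum(x[d + i] * y[i] for i in range(len(y)))
            for d in range(len(x) - len(y) + 1)]

x = [0, 1, 3, 1, 0, 0]
y = [1, 3, 1]
scores = correlate(x, y)
best_delay = scores.index(max(scores))   # delay with best correlation
```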
<A NAME = "CorrelationCoefficient"></A>
<P><DT><B>Correlation Coefficient</B>
<DD>The value from -1 to +1 describing the
<A HREF = "#Correlation">correlation</A> of two binary sequences,
averaged over the length of interest.
Correlation coefficient values are related to the probability that,
given a symbol from one sequence, the other sequence will have that
same symbol. A value of:
<UL>
<LI>-1 implies a 0.0 probability (the second sequence is the
complement of the first),
<LI>0 implies a 0.5 probability (the sequences are
uncorrelated), and
<LI>+1 implies a 1.0 probability (the sequences are the same).
</UL>
<P>"The correlation coefficient associated with a pair of
<A HREF = "#BooleanFunction">Boolean functions</A>
<I>f(a)</I> and <I>g(a)</I> is denoted by C(f,g) and is given by
<BLOCKQUOTE><TT>
C(<I>f,g</I>) = 2 * prob(<I>f(a) = g(a)</I>) - 1 ."
</TT></BLOCKQUOTE>
<P>Daemen, J., R. Govaerts and J. Vanderwalle. 1994.
Correlation Matrices.
<I>Fast Software Encryption.</I>
276. Springer-Verlag.
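<P>The Daemen et al. formula is easy to evaluate over sampled
sequences; the three landmark values above fall out directly:

```python
def corr_coeff(f_out, g_out):
    """C(f,g) = 2 * prob(f(a) == g(a)) - 1, estimated over samples."""
    matches = sum(1 for u, v in zip(f_out, g_out) if u == v)
    return 2 * matches / len(f_out) - 1

s = [0, 1, 1, 0, 1, 0, 0, 1]
same       = corr_coeff(s, s)                   # identical sequences
complement = corr_coeff(s, [1 - b for b in s])  # complemented sequence
```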
<A NAME = "CRC"></A>
<P><DT><B>CRC</B>
<DD>Cyclic Redundancy Check: A fast error-check
<A HREF = "#Hash">hash</A> based on
<A HREF = "#Mod2Polynomial">mod 2 polynomial</A> operations.
<P>A CRC is essentially a fast remainder operation over
a huge numeric value which is the data. (For best speed, the
actual computation occurs as mod 2 polynomial operations.)
The CRC result is an excellent (but linear) hash value
corresponding to the data.
<P>No CRC has any appreciable
<A HREF = "#Strength">strength</A>,
but some applications -- even in cryptography -- <I>need</I> no
strength:
<UL>
<LI>One example is
<A HREF = "#Authentication">authentication</A>, provided the
linear CRC hash result is protected by a block cipher.
<LI>Another example is
<A HREF = "#Key">key</A> processing, where the uncertainty
in a User Key phrase of arbitrary size is collected into a
hash result of fixed size. In general, the hash result would
be just as good for The Opponent as the original key phrase,
so no strength shield could possibly improve the situation.
<LI>A third example is the accumulation of the uncertainty in
slightly uncertain
<A HREF = "#PhysicallyRandom">physically random</A> events.
When true randomness is accumulated, it is already as
unknowable as any strength shield could make it.
</UL>
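<P>The linearity which denies a CRC any strength can be demonstrated
with the standard CRC-32 (here via Python's zlib): for equal-length
messages, <NOBR>crc(a XOR b) = crc(a) XOR crc(b) XOR crc(0)</NOBR>, up
to the constant contributed by the conventional pre- and
post-inversion.

```python
import zlib

a = b"HELLO WORLD!"
b = b"hello world."
zeros = bytes(len(a))
xored = bytes(x ^ y for x, y in zip(a, b))

# CRC is linear over mod 2 polynomials, so an opponent can predict
# how plaintext changes alter the hash -- fine for error checking,
# useless as a strength shield on its own:
lhs = zlib.crc32(xored)
rhs = zlib.crc32(a) ^ zlib.crc32(b) ^ zlib.crc32(zeros)
```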
<A NAME = "Cryptanalysis"></A>
<P><DT><B>Cryptanalysis</B>
<DD>That aspect of
<A HREF = "#Cryptology">cryptology</A> which concerns the
<A HREF = "#Strength">strength</A> analysis of a
<A HREF = "#Cryptography">cryptographic</A> system, and the
penetration or
<A HREF = "#Break">breaking</A> of a cryptographic system.
Also "codebreaking."
<P>Because there is no theory which guarantees strength for any
conventional cipher, ciphers traditionally have been considered
"strong" when they have been used for a long time with "nobody"
knowing how to break them easily. Cryptanalysis seeks to improve
this process by applying the known
<A HREF = "#Attack">attack</A> strategies to new
<A HREF = "#Cipher">ciphers</A>, and by actively seeking new ones.
It is normal to assume that at least
<A HREF = "#KnownPlaintextAttack">known-plaintext</A> is available;
often,
<A HREF = "#DefinedPlaintextAttack">defined-plaintext</A> is assumed.
The result is typically some value for the amount of "work" which will
achieve a "break" (even if that value is impractical); this is "the"
<A HREF = "#Strength">strength</A> of the cipher.
<P>But while cryptanalysis <I>can</I> prove "weakness" for a given
level of effort, cryptanalysis <I>cannot</I> prove that there is no
simpler attack:
<BLOCKQUOTE><BIG><B>Lack of proof of weakness is not proof of
strength.</B></BIG></BLOCKQUOTE>
<P>Indeed, when ciphers are used for real,
<A HREF = "#Opponent">The Opponents</A> can hardly be expected to
advertise a successful break, but will instead work hard to
reassure users that their ciphers are still secure. The fact that
<I>apparently</I> "nobody" knows how to break a cipher is somewhat
less reassuring from this viewpoint. In this context, using a wide
variety of different ciphers can make good sense: This reduces the
value of the information protected by any particular cipher, which
thus reduces the rewards from even a successful attack. Having
numerous ciphers also requires The Opponents to field far greater
resources to identify, analyze, and automate breaking (when possible)
of each different cipher.
<P>Many academic attacks are essentially theoretical, involving huge
amounts of data and computation. But even when a direct technical
attack is <I>practical,</I> that may be the most difficult, expensive
and time-consuming way to obtain the desired information. Other
methods include making a paper copy, stealing a copy, bribery,
coercion, and electromagnetic monitoring. No cipher can keep secret
something which has been otherwise revealed. Information
<A HREF = "#Security">security</A> thus involves far more than just
<A HREF = "#Cryptography">cryptography</A>, and even a cryptographic
system is more than just a cipher. Even finding that information
has been revealed does not mean that a cipher has been broken.
<P>At one time it was reasonable to say: "Any cipher a man can
make, another man can break." However, with the advent of
serious
<A HREF = "#Computer">computer</A>-based cryptography, that
statement is no longer
valid, <I>provided</I> that every detail is properly handled.
This, of course, often turns out not to be the case.
<A NAME = "Cryptanalyst"></A>
<P><DT><B>Cryptanalyst</B>
<DD>Someone who
<A HREF = "#Attack">attacks</A>
<A HREF = "#Cipher">ciphers</A> with
<A HREF = "#Cryptanalysis">cryptanalysis</A>. A "codebreaker."
Often called the
<A HREF = "#Opponent">Opponent</A> by cryptographers, in
recognition of the (serious) game of thrust and parry between
these parties.
<A NAME = "Cryptographer"></A>
<P><DT><B>Cryptographer</B>
<DD>Someone who creates
<A HREF = "#Cipher">ciphers</A> using
<A HREF = "#Cryptography">cryptography</A>.
<A NAME = "CryptographicMechanism"></A>
<P><DT><B>Cryptographic Mechanism</B>
<DD>A process for enciphering and/or deciphering, or an
implementation (for example,
<A HREF = "#Hardware">hardware</A>,
<A HREF = "#Computer">computer</A>
<A HREF = "#Software">software</A>,
hybrid, or the like) for performing that process. See also
<A HREF = "#Cryptography">cryptography</A> and
<A HREF = "#Mechanism">mechanism</A>.
<A NAME = "Cryptography"></A>
<P><DT><B>Cryptography</B>
<DD>Greek for "hidden writing." The art and science of transforming
information into an intermediate form which
<A HREF = "#Security">secures</A> that information while in storage
or in transit. A part of
<A HREF = "#Cryptology">cryptology</A>, further divided into secret
<A HREF = "#Code">codes</A> and
<A HREF = "#Cipher">ciphers</A>.
As opposed to
<A HREF = "#Steganography">steganography</A>, which seeks to
hide the existence of any message, cryptography seeks to
render a message unintelligible <I>even when the message is
completely exposed</I>.
<P>Cryptography includes at least:
<UL>
<LI><A HREF = "#Secrecy">secrecy</A> (<I>confidentiality,</I> or
<I>privacy,</I> or <I>information security</I>) and
<LI><A HREF = "#Authentication">message authentication</A>
(<I>integrity</I>).
</UL>
Cryptography may also include:
<UL>
<LI><I>nonrepudiation</I> (the inability to deny sending a
message),
<LI><I>access control</I> (<I>user</I> or <I>source</I>
authentication), and
<LI><I>availability</I> (keeping security services available).
</UL>
<P>Modern cryptography generally depends upon translating a
message into one of an astronomical number of different
intermediate representations, or
<A HREF = "#Ciphertext">ciphertexts</A>, as selected by a
<A HREF = "#Key">key</A>. If all possible
intermediate representations have similar appearance, it may be
necessary to try all possible keys to find the one which
deciphers the message. By creating
<A HREF = "#Mechanism">mechanisms</A> with an
astronomical number of keys, we can make this approach
impractical.
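<P>As a rough illustration (the trial rate below is an arbitrary
assumption, not a claim about any real machine), we can sketch the
brute force arithmetic:

```python
# Illustrative keyspace arithmetic: the expected brute force effort
# is about half the keyspace.  The trial rate is an assumed figure.

def years_to_search(key_bits, trials_per_second=1e9):
    """Expected years to find a key by trying half of 2**key_bits keys."""
    trials = 2.0 ** (key_bits - 1)            # expected number of trials
    seconds = trials / trials_per_second
    return seconds / (365.25 * 24 * 3600)

y56 = years_to_search(56)      # about a year at a billion trials/sec
y128 = years_to_search(128)    # an astronomical number of years
```

Each added key bit doubles the expected search time, which is why
modest increases in key size defeat enormous increases in computing
power.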
<P>Cryptography may also be seen as a zero-sum game, where a
<A HREF = "#Cryptographer">cryptographer</A> competes against a
<A HREF = "#Cryptanalyst">cryptanalyst</A>. We might call
this the <A HREF = "#CryptographyWar">cryptography war</A>.
<A NAME = "CryptographyWar"></A>
<P><DT><B>Cryptography War</B>
<DD>
<A HREF = "#Cryptography">Cryptography</A> may be seen as a
dynamic <I>battle</I> between
<A HREF = "#Cryptographer">cryptographer</A> and
<A HREF = "#Cryptanalyst">cryptanalyst</A>. The cryptographer
tries to produce a
<A HREF = "#Cipher">cipher</A> which can retain
<A HREF = "#Secrecy">secrecy</A>. Then,
when it becomes worthwhile, one or more cryptanalysts try to
penetrate that secrecy by
<A HREF = "#Attack">attacking</A> the
cipher. Fortunately for the war, even after fifty years of
mathematical cryptology, not <I>one</I> practical cipher has
been accepted as <I>proven</I>
<A HREF = "#Security">secure</A> in practice. (See, for example, the
<A HREF = "#OneTimePad">one-time pad</A>.)
<P>Note that the successful cryptanalyst must keep good attacks
secret, or the opposing cryptographer will just produce a
<A HREF = "#Strength">stronger</A>
cipher. This means that the cryptographer is in the odd position
of never knowing whether his or her best cipher designs are
successful, or which side is winning.
<P>Cryptographers are often scientists who are trained to ignore
unsubstantiated claims. But there will <I>be</I> no substantiation
when a
<A HREF = "#Cipher">cipher</A>
<A HREF = "#System">system</A> is
<A HREF = "#Attack">attacked</A> and
<A HREF = "#Break">broken</A> for real, yet
continued use will endanger all messages so "protected." Thus,
it is a very reasonable policy not to adopt a widely-used cipher,
and to change ciphers periodically.
<A NAME = "Cryptology"></A>
<P><DT><B>Cryptology</B>
<DD>The field of study which generally includes
<A HREF = "#Steganography">steganography</A>,
<A HREF = "#Cryptography">cryptography</A> and
<A HREF = "#Cryptanalysis">cryptanalysis</A>.
<A NAME = "Current"></A>
<P><DT><B>Current</B>
<DD>The measure of electron flow, in amperes.
Current is analogous to the amount of water <I>flow,</I> as opposed
to <I>pressure</I> or
<A HREF = "#Voltage">voltage</A>.
A flowing electrical current will create a
<A HREF = "#MagneticField">magnetic field</A> around the
<A HREF = "#Conductor">conductor</A>.
A changing electrical current may create an
<A HREF = "#ElectromagneticField">electromagnetic field</A>.
<A NAME = "dB"></A>
<P><DT><HR><P><B>dB</B>
<DD><A HREF = "#Decibel">decibel</A>.
<A NAME = "DC"></A>
<P><DT><B>DC</B>
<DD>Direct
<A HREF = "#Current">Current</A>:
Electrical power which flows in one direction, more or less
constantly. As opposed to
<A HREF = "#AC">AC</A>.
<P>Most
<A HREF = "#Electronic">electronic</A> devices require DC -- at
least internally -- for proper operation, so a substantial part
of modern design is the "power supply" which converts 120 VAC wall
power into 12 VDC, 5 VDC and/or 3 VDC as needed by the
<A HREF = "#Circuit">circuit</A>
and active devices.
<A NAME = "Debug"></A>
<P><DT><B>Debug</B>
<DD>The interactive analytical process of correcting the design of a
complex
<A HREF = "#System">system</A>. A normal part of the development
process, although when
<A HREF = "#Bug">bugs</A> are not caught during development, they
can remain in production systems.
<P>Contrary to naive expectations, a complex system almost never
performs as desired when first realized. Both
<A HREF = "#Hardware">hardware</A> and
<A HREF = "#Software">software</A>
<A HREF = "#SystemDesign">system design</A> environments generally
deal with systems which are not working.
(When a system <I>really</I> works, the design and development
process is generally over.)
Debugging involves identifying problems, analyzing the source of
those problems, then changing the construction to fix the problem.
(Hopefully, the fix will not itself create new problems.)
This form of interactive analysis can be especially difficult because
the realized design may not actually be what is described in the
schematics, flow-charts, or other working documents: To some extent
the real system is unknown.
<P>When a system has many problems, the problems tend to interact,
which can make the identification of a particular cause very
difficult. This can be managed by "shrinking" the system: first
by partitioning the design into components and testing those
components, and then by temporarily disabling or removing sections
so as to identify the section in which the problem lies. Eventually,
with enough testing, partitioning and analysis, the source of any
problem can be identified. Some "problems,"
however, turn out to be the unexpected implications of a complex
design and are sometimes accepted as "features" rather than the
alternative of a complete design overhaul.
<A NAME = "Decipher"></A>
<P><DT><B>Decipher</B>
<DD>The process which can reveal the information
or <A HREF = "#Plaintext">plaintext</A> hidden in message
<A HREF = "#Ciphertext">ciphertext</A> (provided it is the
correct process, with the proper
<A HREF = "#Key">key</A>). The inverse of
<A HREF = "#Encipher">encipher</A>.
<A NAME = "Decryption"></A>
<P><DT><B>Decryption</B>
<DD>The general term for extracting information which was hidden
by <A HREF = "#Encryption">encryption</A>.
<A NAME = "DeductiveReasoning"></A>
<P><DT><B>Deductive Reasoning</B>
<DD>In the study of
<A HREF = "#Logic">logic</A>,
reasoning about a particular case from one or more general
statements; a proof. Also see:
<A HREF = "#InductiveReasoning">inductive reasoning</A> and
<A HREF = "#Fallacy">fallacy</A>.
<A NAME = "DefinedPlaintextAttack"></A>
<P><DT><B>Defined Plaintext Attack</B>
<DD>A form of
<A HREF = "#Attack">attack</A> in which the
<A HREF = "#Opponent">Opponent</A> can present arbitrary
<A HREF = "#Plaintext">plaintext</A> to be enciphered, and then
capture the resulting
<A HREF = "#Ciphertext">ciphertext</A>. The ultimate form of
<A HREF = "#KnownPlaintextAttack">known plaintext attack</A>.
<P>A defined plaintext attack can be a problem for systems which
allow unauthorized users to present arbitrary messages for
ciphering. Such an attack can be made difficult by allowing only
authorized users to encipher data, by allowing only a few
messages to be enciphered between
<A HREF = "#Key">key</A> changes, by changing keys
frequently, and by enciphering each message in a different
random
<A HREF = "#MessageKey">message key</A>.
<A NAME = "DegreesOfFreedom"></A>
<P><DT><B>Degrees of Freedom</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, the number of completely
independent values in a sample. The number of sampled values or
observations or bins, less the number of defined or freedom-limiting
relationships or "constraints" between those values.
<P>If we choose two values completely independently, we have a
DF of 2. But if we must choose two values such that the second is
twice the first, we can choose only the first value independently.
Imposing a relationship on one of the sampled values means that we
will have a DF of one less than the number of samples, even though
we may end up with apparently similar sample values.
<P>In a typical
<A HREF = "#GoodnessOfFit">goodness of fit</A> test such as
<A HREF = "#ChiSquare">chi-square</A>, the reference
<A HREF = "#Distribution">distribution</A> (the expected counts) is
normalized to give the same number of counts as the experiment. This
is a constraint, so if we have N bins, we will have a DF of N - 1.
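<P>The computation can be sketched as follows (the observed counts
here are invented solely for illustration):

```python
# Chi-square sketch: observed counts in N bins are compared against
# expected counts normalized to the same total; that normalization
# is one constraint, so the degrees of freedom are DF = N - 1.

def chi_square(observed, expected):
    """Sum of (O - E)**2 / E over all bins."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [18, 22, 25, 15, 20]                     # 100 samples, 5 bins
total = sum(observed)
expected = [total / len(observed)] * len(observed)  # uniform reference
df = len(observed) - 1                              # N - 1 = 4
x2 = chi_square(observed, expected)
```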
<A NAME = "DES"></A>
<P><DT><B>DES</B>
<DD>The particular
<A HREF = "#BlockCipher">block cipher</A> which is the U.S. Data
Encryption Standard. A 64-bit block cipher with a 56-bit key
organized as 16
<A HREF = "#Round">rounds</A> of operations.
<A NAME = "Decibel"></A>
<P><DT><B>Decibel</B>
<DD>Ten times the base-10 logarithm of the ratio of two
<A HREF = "#Power">power</A> values. Denoted by dB.
One-tenth of a
<A HREF = "#Bel">bel</A>.
<P>When
<A HREF = "#Voltage">voltages</A> or
<A HREF = "#Current">currents</A> are measured, power changes
as the square of these values, so a decibel is twenty times the
base-10 logarithm of the ratio of two voltages or currents.
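<P>In short (a small sketch of the definition above):

```python
import math

# Decibel sketch: 10 times the base-10 log of a power ratio, or
# 20 times the base-10 log of a voltage (or current) ratio, since
# power changes as the square of voltage or current.

def db_power(p1, p2):
    return 10.0 * math.log10(p1 / p2)

def db_voltage(v1, v2):
    return 20.0 * math.log10(v1 / v2)

# Doubling power is about +3 dB; doubling voltage is about +6 dB.
```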
<A NAME = "Decimal"></A>
<P><DT><B>Decimal</B>
<DD>Base 10: The numerical representation in which each digit has an
<A HREF = "#Alphabet">alphabet</A> of ten symbols, usually 0 through 9.
Also see:
<A HREF = "#Binary">binary</A>,
<A HREF = "#Octal">octal</A>, and
<A HREF = "#Hexadecimal">hexadecimal</A>.
<A NAME = "DesignStrength"></A>
<P><DT><B>Design Strength</B>
<DD>The
<A HREF = "#Keyspace">keyspace</A>; the effort required for a
<A HREF = "#BruteForceAttack">brute force attack</A>.
<A NAME = "Deterministic"></A>
<P><DT><B>Deterministic</B>
<DD>A process whose sequence of operations is fully determined
by its initial
<A HREF = "#State">state</A>. A mechanical or clockwork-like
process whose outcome is inevitable, given its initial setting.
<A HREF = "#PseudoRandom">Pseudorandom</A>.
<A NAME = "DictionaryAttack"></A>
<P><DT><B>Dictionary Attack</B>
<DD>Typically an
<A HREF = "#Attack">attack</A> on a secret password. A dictionary
of common passwords is developed, and a
<A HREF = "#BruteForceAttack">brute force attack</A> conducted
on the target with each common password.
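<P>A sketch of the idea (the word list and the use of SHA-256 here
are hypothetical, purely for illustration):

```python
import hashlib

# Dictionary attack sketch: hash each common password and compare
# against the captured password hash.

def dictionary_attack(target_hash, word_list):
    """Return the matching password, or None if the list fails."""
    for word in word_list:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

common = ["123456", "password", "letmein", "secret"]
target = hashlib.sha256(b"letmein").hexdigest()    # the captured hash
found = dictionary_attack(target, common)
```

The attack succeeds exactly when the password is a common one, which
is why long random passwords defeat it.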
<A NAME = "DifferentialCryptanalysis"></A>
<P><DT><B>Differential Cryptanalysis</B>
<DD>A form of
<A HREF = "#Attack">attack</A> in which the difference between
values (or keys) is used to gain some information about the
system.
<P>Also see
<A HREF = "http://www.io.com/~ritter/RES/DIFFANA.HTM">Differential
Cryptanalysis: A Literature Survey</A>, in the
<A HREF = "http://www.io.com/~ritter/#LiteratureSurveys">Literature
Surveys and Reviews</A> section of the
<A HREF = "http://www.io.com/~ritter/">Ciphers By Ritter</A> page.
<A NAME = "Diffusion"></A>
<P><DT><B>Diffusion</B>
<DD>Diffusion is the property of an operation such that changing
one
<A HREF = "#Bit">bit</A>
(or <A HREF = "#Byte">byte</A>) of the input will change adjacent
or near-by bits (or bytes) after the operation. In a
<A HREF = "#BlockCipher">block cipher</A>, diffusion propagates
bit-changes from one part of a block to other parts of the block.
Diffusion requires
<A HREF = "#Mixing">mixing</A>, and the step-by-step process of
increasing diffusion is described as
<A HREF = "#Avalanche">avalanche</A>.
Diffusion is in contrast to <A HREF = "#Confusion">confusion</A>.
<P>Normally we speak of <I>data</I> diffusion, in which changing
a tiny part of the plaintext data may affect the whole ciphertext.
But we can also speak of <I>key</I> diffusion, in which changing
even a tiny part of the
<A HREF = "#Key">key</A> should change each bit in the
ciphertext with probability 0.5.
<P>Perhaps the best diffusing
<A HREF = "#Component">component</A> is
<A HREF = "#SimpleSubstitution">substitution</A>, but
this diffuses only within a single substituted value.
<A HREF = "#SubstitutionPermutation">Substitution-permutation</A>
ciphers get around this by moving the bits of each substituted element
to other elements, substituting again, and repeating. But this only
provides guaranteed diffusion if particular substitution tables are
constructed. Another alternative is to use some sort of
<A HREF = "#BalancedBlockMixing">Balanced Block Mixing</A> which has
an inherently guaranteed diffusion, or a
<A HREF = "#VariableSizeBlockCipher">Variable Size Block Cipher</A>
construction.
Also see
<A HREF = "#OverallDiffusion">Overall Diffusion</A>.
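<P>Diffusion can be measured directly: flip one input bit and count
the output bits which change. The mixing function below is a toy (a
linear xorshift-style construction invented for this sketch, not a
cipher component):

```python
# Avalanche measurement sketch: one flipped input bit should change
# many output bits after a few rounds of mixing.

MASK = 0xFFFFFFFF                      # work in 32-bit words

def toy_mix(x, rounds=4):
    for _ in range(rounds):
        x = (x ^ (x << 13)) & MASK
        x ^= x >> 17
        x = (x ^ (x << 5)) & MASK
    return x

def bits_changed(a, b):
    return bin(a ^ b).count("1")

x = 0x12345678
diff = bits_changed(toy_mix(x), toy_mix(x ^ 1))   # one-bit input change
```

Because this toy is linear, the bit-difference pattern is the same
for every input; a real cipher also needs the nonlinearity of
substitution.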
<A NAME = "Digital"></A>
<P><DT><B>Digital</B>
<DD>Pertaining to discrete or distinct finite values. As
opposed to
<A HREF = "#Analog">analog</A>
or continuous quantities.
<A NAME = "Diode"></A>
<P><DT><B>Diode</B>
<DD>An
<A HREF = "#Electronic">electronic</A> device with two terminals
which allows
<A HREF = "#Current">current</A> to flow in only one direction.
<A NAME = "Distribution"></A>
<P><DT><B>Distribution</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, the range of values which a
<A HREF = "#RandomVariable">random variable</A> may take, and the probability
that each value or range of values will occur. Also the probability
of test
<A HREF = "#Statistic">statistic</A> values for the case
"nothing unusual found," which is the
<A HREF = "#NullHypothesis">null hypothesis</A>.
<P>If we have a <I>discrete</I> distribution, with a finite number
of possible result values, we can speak of "frequency" and
"probability" distributions:
The "frequency distribution" is the expected <I>number</I> of
occurrences for each possible value, in a particular
<A HREF = "#Sample">sample</A> size.
The "probability distribution" is the <I>probability</I> of getting
each value, normalized to a probability of 1.0 over the sum of all
possible values.
<P>Here is a graph of a typical "discrete probability distribution"
or "discrete probability density function," which displays the
probability of getting a particular statistic value for the case
"nothing unusual found":
<PRE>
0.1| ***
| * * Y = Probability of X
Y | ** ** y = P(x)
| **** ****
0.0 -------------------
X
</PRE>
<P>Unfortunately, it is not really possible to think in the same way
about continuous distributions: Since continuous distributions have
an infinite number of possible values, the probability of getting
any <I>particular</I> value is zero. For continuous distributions,
we instead talk about the probability of getting a value in some
subrange of the overall distribution. We are often concerned with
the probability of getting a particular value or below, or the
probability of a particular value or above.
<P>Here is a graph of the related "cumulative probability distribution"
or "cumulative distribution function" (<A HREF = "#cdf">c.d.f.</A>)
for the case "nothing unusual found":
<PRE>
1.0| ******
| ** Y = Probability (0.0 to 1.0) of finding
Y | * a value which is x or less
| **
0.0 -******------------
X
</PRE>
<P>The c.d.f. is just the sum of all probabilities for a given value
or less. This is the usual sort of function used to interpret a
<A HREF = "#Statistic">statistic</A>: Given some result, we can
look up the probability of a lesser value (normally called
<I>p</I>) or a greater value (called <NOBR><I>q</I> =
1.0 - <I>p</I></NOBR>).
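<P>For instance (using a small invented discrete distribution, the
number of heads in four fair coin flips):

```python
# c.d.f. sketch: p is the probability of a given value or less,
# and q = 1.0 - p.

probs = {0: 1/16, 1: 4/16, 2: 6/16, 3: 4/16, 4: 1/16}

def cdf(x):
    """Probability of a result of x or less."""
    return sum(pr for v, pr in probs.items() if v <= x)

p = cdf(3)          # 3 heads or fewer
q = 1.0 - p         # more than 3 heads
```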
<P>Usually, a test statistic is designed so that extreme values are
not likely to occur by chance in the case "nothing unusual found"
which is the
<A HREF = "#NullHypothesis">null hypothesis</A>. So if we <I>do</I>
find extreme values, we have a strong argument that the results were
not due simply to random sampling or other random effects, and may
choose to reject the null hypothesis and thus accept the
<A HREF = "#AlternativeHypothesis">alternative hypothesis</A>.
<P>Common discrete distributions include the
<A HREF = "#BinomialDistribution">binomial distribution</A>, and the
<A HREF = "#PoissonDistribution">Poisson distribution</A>.
<A NAME = "Distributive"></A>
<P><DT><B>Distributive</B>
<DD>The case of a
<A HREF = "#Dyadic">dyadic</A> operation, which may be called
"multiplication," which can be applied to equations involving
another dyadic operation, which may be called "addition," such
that:
<NOBR>a(b + c) = ab + ac</NOBR> and
<NOBR>(b + c)a = ba + bc.</NOBR>
<P>Also see:
<A HREF = "#Associative">associative</A> and
<A HREF = "#Commutative">commutative</A>.
<A NAME = "DivideAndConquer"></A>
<P><DT><B>Divide and Conquer</B>
<DD>The general concept of being able to split a complexity into
several parts, each part naturally being less complex than the
total. If this is possible, The
<A HREF = "#Opponent">Opponent</A> may be able to solve all
of the parts far easier than the supposedly complex whole.
Often part of an <A HREF = "#Attack">attack</A>.
<P>This is a particular danger in cryptosystems, since most ciphers
are built from less-complex parts. Indeed, a major role of
cryptographic design is to combine small
<A HREF = "#Component">component</A> parts into a larger complex
<A HREF = "#System">system</A> which cannot be split apart.
<A NAME = "Domain"></A>
<P><DT><B>Domain</B>
<DD>The set of all arguments <I>x</I> which can be applied to a
<A HREF = "#Mapping">mapping</A>. Also see
<A HREF = "#Range">range</A>.
<A NAME = "Dyadic"></A>
<P><DT><B>Dyadic</B>
<DD>Relating to <I>dyad</I>, which is Greek for dual or having
two parts. In particular, a function with two inputs or arguments.
Also see:
<A HREF = "#Monadic">monadic</A>,
<A HREF = "#Unary">unary</A> and
<A HREF = "#Binary">binary</A>.
<A NAME = "DynamicKeying"></A>
<P><DT><B>Dynamic Keying</B>
<DD>That aspect of a cipher which allows a
<A HREF = "#Key">key</A> to be changed with
minimal overhead. A dynamically-keyed
<A HREF = "#BlockCipher">block cipher</A> might impose
little or no additional computation to change a key on a
block-by-block basis. The dynamic aspect of keying could be
just one of multiple keying mechanisms in the same cipher.
<P>One way to have a dynamic key in a block cipher is to include
the key value along with the
<A HREF = "#Plaintext">plaintext</A> data. But this is
normally practical only with blocks of huge size, or
<A HREF = "#VariableSizeBlockCipher">variable size</A> blocks.
<P>Another way to have a dynamic key in a block cipher is to add a
<A HREF = "#Confusion">confusion</A>
<A HREF = "#Layer">layer</A> which mixes the key value with the
block. For example, exclusive-OR could be used to mix a 64-bit
key with a 64-bit data block.
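<P>A sketch of that last construction (the "core" transform below is
a made-up invertible placeholder, standing in for the block cipher
proper):

```python
# Dynamic keying sketch: exclusive-OR a per-block key with the
# 64-bit block before the main ciphering transformation.

MASK64 = (1 << 64) - 1
C = 0x9E3779B97F4A7C15        # arbitrary odd (thus invertible) constant
C_INV = pow(C, -1, 1 << 64)   # its multiplicative inverse mod 2**64

def core_encipher(block):     # placeholder block transform
    return (block * C + 7) & MASK64

def core_decipher(block):
    return ((block - 7) * C_INV) & MASK64

def encipher(block, dynamic_key):
    return core_encipher(block ^ dynamic_key)

def decipher(block, dynamic_key):
    return core_decipher(block) ^ dynamic_key
```

Changing the dynamic key costs only a single exclusive-OR per block,
which is the point of the construction.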
<A NAME = "DynamicSubstitutionCombiner"></A>
<P><DT><B>Dynamic Substitution Combiner</B>
<DD>The
<A HREF = "#Combiner">combining</A>
<A HREF = "#Mechanism">mechanism</A> described in U.S. Patent
4,979,832 (see the
<A HREF = "http://www.io.com/~ritter/#DynSubTech">Dynamic Substitution articles</A> on the
<A HREF = "http://www.io.com/~ritter/">Ciphers By Ritter</A> page).
<P>Dynamic Substitution is the use of an invertible
<A HREF = "#SubstitutionTable">substitution table</A>
in which the arrangement of the entries changes dynamically during
operation. This is particularly useful as a strong replacement for
the strengthless
<A HREF = "#ExclusiveOR">exclusive-OR</A> combiner in
<A HREF = "#StreamCipher">stream ciphers</A>.
<P>The arrangement of a
<A HREF = "#Key">keyed</A> table starts out unknown to an
<A HREF = "#Opponent">Opponent</A>. From the Opponent's point of
view, each table entry could be any possible value with uniform
probability.
But after the first value is mapped through that table, the used
transformation (table entry) is at least potentially exposed, and
no longer can be considered a completely unknown probability.
Dynamic Substitution acts to make the used transformation again
completely unknown and unbiased, by allowing it to take on any
possible mapping. As a first approximation, the amount of
information leaked about table contents is replaced by information
used to re-define each used entry.
<P>In the usual case, an invertible substitution table is keyed by
<A HREF = "#Shuffle">shuffling</A> under the control of a
<A HREF = "#RandomNumberGenerator">random number generator</A>.
One combiner input value is used to select a value from within
that table to be the result or output. The other combiner input
value is used simply to select an entry, and then the values at
the two selected entries are exchanged. So as soon as a
<A HREF = "#Plaintext">plaintext</A> mapping is used, it is
immediately reset to any possibility, and the more often any
plaintext value occurs, the more often that transformation changes.
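<P>The mechanism can be sketched as follows (an illustrative model
of the description above, not the patented implementation itself);
both ends construct the same keyed table and confusion sequence:

```python
import random

# Dynamic Substitution sketch: map each data byte through a keyed
# invertible table, then swap the just-used entry with one selected
# by the confusion (random number) sequence.

class DynSub:
    def __init__(self, seed):
        self.rng = random.Random(seed)        # keyed RNG, shared secretly
        self.table = list(range(256))
        self.rng.shuffle(self.table)          # keyed initial arrangement
        self.inv = [0] * 256                  # inverse table for extraction
        for i, v in enumerate(self.table):
            self.inv[v] = i

    def _swap(self, i, j):
        t = self.table
        t[i], t[j] = t[j], t[i]
        self.inv[t[i]], self.inv[t[j]] = i, j

    def combine(self, data):                  # encipher one byte
        out = self.table[data]
        self._swap(data, self.rng.randrange(256))
        return out

    def extract(self, comb):                  # decipher one byte
        data = self.inv[comb]
        self._swap(data, self.rng.randrange(256))
        return data

enc, dec = DynSub(42), DynSub(42)             # same key at both ends
message = b"ATTACK AT DAWN"
ct = [enc.combine(b) for b in message]
pt = bytes(dec.extract(c) for c in ct)
```

The extractor mirrors every swap the combiner makes, so the two
tables stay synchronized and deciphering recovers the plaintext.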
<P>Also see
<A HREF = "#BalancedBlockMixing">Balanced Block Mixing</A>, and
<A HREF = "#VariableSizeBlockCipher">Variable Size Block Cipher</A>.
<A NAME = "DynamicTransposition"></A>
<P><DT><B>Dynamic Transposition</B>
<DD>A
<A HREF = "#BlockCipher">block cipher</A> which first creates an
exact bit-<A HREF = "#Balance">balance</A> within each
<A HREF = "#Block">block</A>, and then
<A HREF = "#Shuffle">shuffles</A> the bits within a block,
each block being
<A HREF = "#Permutation">permuted</A> independently from a
<A HREF = "#Key">keyed</A>
<A HREF = "#RandomNumberGenerator">random number generator</A>.
<P>Since each block --
<A HREF = "#Plaintext">plaintext</A> or
<A HREF = "#Ciphertext">ciphertext</A> --
contains exactly the same number of 1's and 0's, every possible
plaintext block is just some permutation of any possible
ciphertext block. And since any possible plaintext block can be
produced from any ciphertext block in a vast plethora of different
ways, the keying sequence is hidden even from
<A HREF = "#KnownPlaintextAttack">known plaintext</A>. And
<A HREF = "#DefinedPlaintextAttack">defined plaintext</A>
is easily defeated with the usual
<A HREF = "#MessageKey">message key</A>.
To the extent that
every possible plaintext block can be produced, the cipher approaches
<A HREF = "#PerfectSecrecy">perfect secrecy</A>.
<P>See the article
<A HREF = "http://www.io.com/~ritter/ARTS/DYNTRAN2.HTM">Transposition
Cipher with Pseudo-Random Shuffling: The Dynamic Transposition
Combiner</A>.
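<P>A simplified model (invented for illustration; block sizes and
details differ from the article):

```python
import random

# Dynamic Transposition sketch: balance the count of 1's and 0's in
# a block, then permute the bit positions under a keyed RNG.  The
# deciphering end builds the same permutation and inverts it.

def to_bits(data):
    return [b >> i & 1 for b in data for i in range(8)]

def balance(bits):
    """Append bits so the block holds equal numbers of 1's and 0's."""
    ones = sum(bits)
    zeros = len(bits) - ones
    return bits + [0] * max(0, ones - zeros) + [1] * max(0, zeros - ones)

def keyed_perm(n, rng):
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def encipher(bits, rng):
    perm = keyed_perm(len(bits), rng)
    return [bits[p] for p in perm]

def decipher(bits, rng):
    perm = keyed_perm(len(bits), rng)
    out = [0] * len(bits)
    for i, p in enumerate(perm):
        out[p] = bits[i]
    return out

pt_bits = balance(to_bits(b"HI"))
ct_bits = encipher(pt_bits, random.Random(99))     # keyed shuffle
```

Note that ciphertext and plaintext blocks have identical bit counts:
the ciphertext is just some permutation of the balanced plaintext.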
<A NAME = "ECB"></A>
<P><DT><HR><P><B>ECB</B>
<DD>ECB or Electronic Code Book is an
<A HREF = "#OperatingMode">operating mode</A> for
<A HREF = "#BlockCipher">block ciphers</A>. Presumably the name
comes from the observation that a block cipher under a fixed
<A HREF = "#Key">key</A>
functions much like a physical codebook: Each possible
<A HREF = "#Plaintext">plaintext</A>
<A HREF = "#Block">block</A> value has a corresponding
<A HREF = "#Ciphertext">ciphertext</A> value, and vice versa.
<P>ECB is the naive method of applying a block cipher, in that
the plaintext is simply partitioned into
appropriate size blocks, and each block is enciphered separately
and independently. When we have a small block size, ECB is
generally unwise, because language text has biased statistics which
will result in some block values being re-used frequently, and this
repetition will show up in the raw ciphertext.
This is the basis for a successful
<A HREF = "#CodebookAttack">codebook attack</A>.
<P>On the other hand, if we have a large block, we may expect it
to contain enough (at least, say, 64 bits) uniqueness or
"<A HREF = "#Entropy">entropy</A>"
to prevent a codebook attack. In that case, ECB mode
has the advantage of supporting independent ciphering of each
block. This, in turn, supports various things, like the use of
multiple ciphering hardware operating in parallel for higher speeds.
<P>As another example, modern packet-switching network technologies
often deliver raw packets out of order. The packets will be
re-ordered eventually, but having out-of-sequence packets can be a
problem for low-level ciphering if the blocks are not ciphered
independently.
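<P>The codebook weakness is easy to demonstrate (the 64-bit block
transform below is an invented stand-in, not a real cipher):

```python
# ECB sketch: equal plaintext blocks produce equal ciphertext
# blocks, because each block is enciphered independently under the
# same fixed key.

KEY = 0x0F1E2D3C4B5A6978
MASK64 = (1 << 64) - 1

def toy_block_encipher(n):                 # invertible toy transform
    return ((n ^ KEY) * 0x9E3779B97F4A7C15 + 0xB5) & MASK64

def ecb_encipher(data):
    blocks = [int.from_bytes(data[i:i + 8], "big")
              for i in range(0, len(data), 8)]
    return [toy_block_encipher(b) for b in blocks]

pt = b"SAME8BYTSAME8BYTDIFFER__"           # first two blocks identical
ct = ecb_encipher(pt)                      # so ct[0] == ct[1]
```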
<A NAME = "ElectricField"></A>
<P><DT><B>Electric Field</B>
<DD>The fundamental physical force resulting from the attraction
of opposing charges.
<A NAME = "ElectromagneticField"></A>
<P><DT><B>Electromagnetic Field</B>
<DD>The remarkable self-propagating physical field consisting of
energy distributed between
<A HREF = "#ElectricField">electric</A> and
<A HREF = "#MagneticField">magnetic</A> fields. Energy in
the electric or potential field collapses and creates or
"charges up" a magnetic field. Energy in the magnetic field
collapses and "charges up" an electric field.
This process allows physical electrical and magnetic fields --
two fairly short-range phenomena -- to "propagate" and thus carry
energy over relatively large distances at the speed of light.
Examples include light, "radio" waves (including TV, cell phones,
etc.), and microwave cooking.
<P>It is important to distinguish between a true electromagnetic
field, and the simpler and range-limited electric and magnetic fields
produced by an electrical clock, motor, or power lines. It is also
important to distinguish between the light-like expanding or
"radiating" property of an electromagnetic field, and the damaging
ionizing radiation produced by a radioactive source.
<P>As far as we know -- and a great many experiments have been
conducted on this -- electromagnetic waves are not life-threatening
(unless they transfer enough power to dangerously heat the water in
our cells).
The belief that electromagnetic fields are not dangerous is also
<I>reasonable,</I> since light itself is an electromagnetic wave,
and life on Earth developed in the context of the electromagnetic
field from the Sun. Indeed, plants actually use that field to their
and our great benefit.
<A NAME = "Electronic"></A>
<P><DT><B>Electronic</B>
<DD>Having to do with the control and use of physical electrons, as
electrical potential or
<A HREF = "#Voltage">voltage</A>, electrical flow or
<A HREF = "#Current">current</A>, and generally both. See
<A HREF = "#Hardware">hardware</A> and
<A HREF = "#Component">component</A>.
<A NAME = "Encipher"></A>
<P><DT><B>Encipher</B>
<DD>The process which will transform information or
<A HREF = "#Plaintext">plaintext</A> into
one of a plethora of intermediate forms or
<A HREF = "#Ciphertext">ciphertext</A>, as selected
by a <A HREF = "#Key">key</A>. The inverse of
<A HREF = "#Decipher">decipher</A>.
<A NAME = "Encryption"></A>
<P><DT><B>Encryption</B>
<DD>The general term for hiding information in secret
<A HREF = "#Code">code</A> or
<A HREF = "#Cipher">cipher</A>.
<A NAME = "Entropy"></A>
<P><DT><B>Entropy</B>
<DD>In information theory, our "uncertainty" as to the value of a
<A HREF = "#RandomVariable">random variable</A>. Given the
non-zero probability (<I>p</I>) of each value (<I>i</I>), we can
calculate an entropy (<I>H</I>) in
<A HREF = "#Bit">bits</A> for random variable <I>X</I> as:
<PRE>
H(X) = -SUM( p<SUB>i</SUB> log2 p<SUB>i</SUB> )
</PRE>
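<P>For example, computing that sum directly:

```python
import math

# Entropy sketch, from the formula above: H(X) in bits, given the
# probability of each value.

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

h_fair = entropy([0.5, 0.5])        # a fair coin: 1 bit
h_biased = entropy([0.9, 0.1])      # a biased coin: less than 1 bit
```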
<P>Although entropy is sometimes taken as a measure of
<A HREF = "#Random">randomness</A>, calculating entropy requires
a knowledge of the probabilities of each value which we often
can attain only by
<A HREF = "#Sample">sampling</A>. This means that we do not
<I>really</I> know the "true" probabilities, but only those we see
in our samples. And the "true" probabilities may change through
time.
<P>By itself, calculated entropy also does not detect any underlying
order that might exist between value probabilities, such as a
<A HREF = "#Correlation">correlation</A>, or a
<A HREF = "#Linear">linear</A> relationship, or any other aspect of
cryptographically-weak randomness. The "true entropy" of a
<A HREF = "#RandomNumberGenerator">random number generator</A>
is just the number of bits in the
<A HREF = "#State">state</A> of that generator, as opposed to an
entropy computation on the sequence it produces.
So a high entropy value does <I>not</I> imply that a
<A HREF = "#ReallyRandom">really-random</A> source really <I>is</I>
random, or indeed that the value has any relationship to the amount
of cryptographic randomness present.
<A NAME = "Ergodic"></A>
<P><DT><B>Ergodic</B>
<DD>In
<A HREF = "#Statistics">statistics</A> and
information theory, a particularly "simple" and easily modelled
<A HREF = "#StationaryProcess">stationary</A> (homogeneous)
<A HREF = "#Stochastic">stochastic</A> (random)
<A HREF = "#Process">process</A> (function) in which the
"temporal average" is the same as the "ensemble average."
In general, a process in which no
<A HREF = "#State">state</A> is prevented from re-occurring.
Ergodic processes are the basis for many important results in
information theory, and are thus a technical requirement before
those results can be applied.
<P>Here we have all three possible sequences from a <B>non</B>-ergodic
process:
<B>across</B> we have the average of symbols through time (the
"temporal average"), and
<B>down</B> we have the average of symbols in a particular position
over all possible sequences (the "ensemble average"):
<PRE>
A B A B A B ... p(A) = 0.5, p(B) = 0.5, p(E) = 0.0
B A B A B A ... p(A) = 0.5, p(B) = 0.5, p(E) = 0.0
E E E E E E ... p(A) = 0.0, p(B) = 0.0, p(E) = 1.0
^ ^ ^ ^ ^ ^
+-+-+-+-+-+---- p(A) = 0.3, p(B) = 0.3, p(E) = 0.3
(From: Pierce, J. 1961. <I>Symbols, Signals and Noise.</I> Ch. 3)
</PRE>
When a process is non-ergodic, the measurements we take over time
from one or a few sequences may not represent all the sequences
which may be encountered.
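<P>Those averages can be computed from the table directly:

```python
from fractions import Fraction

# Temporal versus ensemble averages for the non-ergodic example
# above: row averages differ from the column (ensemble) average.

seqs = ["ABABAB", "BABABA", "EEEEEE"]

def freq(symbols, alphabet="ABE"):
    n = len(symbols)
    return {s: Fraction(list(symbols).count(s), n) for s in alphabet}

temporal = [freq(row) for row in seqs]         # averages through time
ensemble = freq([row[0] for row in seqs])      # average down a column
```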
<A NAME = "Extractor"></A>
<P><DT><B>Extractor</B>
<DD>In a cryptographic context, an extractor is a
<A HREF = "#Mechanism">mechanism</A> which
produces the inverse effect of a
<A HREF = "#Combiner">combiner</A>. This allows data to
be enciphered in a combiner, and then deciphered in an
extractor. Sometimes an extractor is exactly the same as the
combiner, as is the case for exclusive-OR.
<A NAME = "ExclusiveOR"></A>
<P><DT><B>Exclusive-OR</B>
<DD>A Boolean
<A HREF = "#LogicFunction">logic function</A> which is also
<A HREF = "#Mod2">mod 2</A> addition. Also called
<A HREF = "#XOR">XOR</A>.
<A NAME = "Factorial"></A>
<P><DT><HR><P><B>Factorial</B>
<DD>The <I>factorial</I> of natural number <I>n,</I> written
<B>n!</B>, is the
product of all
<A HREF = "#Integer">integers</A> from 1 to <I>n.</I>
<P>See the
<A HREF = "http://www.io.com/~ritter/JAVASCRP/PERMCOMB.HTM#Factorials">factorials</A>
section of the
<A HREF = "http://www.io.com/~ritter/#JavaScript">Ciphers By Ritter /
JavaScript</A> computation pages.
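<P>A direct computation (also relevant to cryptography: n! counts
the possible arrangements of an n-entry substitution table):

```python
import math

# Factorial sketch: n! as a running product over 1..n, with 0! = 1.

def factorial(n):
    product = 1
    for i in range(2, n + 1):
        product *= i
    return product

table_keyspace = factorial(256)   # arrangements of a 256-entry table
```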
<A NAME = "Fallacy"></A>
<P><DT><B>Fallacy</B>
<DD>In the philosophical study of
<A HREF = "#Logic">logic</A>, apparently-reasonable
arguments which lead to false conclusions. Also see:
<A HREF = "#InductiveReasoning">inductive reasoning</A> and
<A HREF = "#DeductiveReasoning">deductive reasoning</A>.
Including:
<OL TYPE = A>
<LI>Fallacies of Insufficient Evidence
<OL TYPE = 1>
<LI>Accident -- a special circumstance makes a rule inapplicable
<LI>Hasty Generalization
<LI><I>non causa pro causa</I> ("False Cause")
<UL>
<LI><I>post hoc ergo propter hoc</I> ("after this therefore
because of this")
<LI><I>reductio ad absurdum</I> -- the assumption that a
particular one of multiple assumptions is necessarily
false if the argument leads to a contradiction.
</UL>
<LI><I>ad ignorantiam</I> ("Appeal to Ignorance") -- a belief
which is assumed true because it is not proven false.
<LI>Card Stacking -- a deliberate withholding of evidence which
does not support the author's conclusions.
</OL>
<LI>Fallacies of Irrelevance (<I>ignoratio elenchi</I>) -- ignoring
the question
<OL TYPE = 1>
<LI><I>ad hominem</I> ("Name Calling").
<LI><I>ad populum</I> ("Plain Folks") -- an appeal to the prejudices
and biases of the audience.
<LI><I>ad misericordiam</I> ("Appeal to Pity")
<LI><I>ad verecundiam</I> ("Inappropriate Authority") -- a
testimonial from someone with expertise in a different field.
<LI><I>tu quoque</I> ("You Did It Too").
<LI><I>ad baculum</I> ("Appeal to force") -- e.g., threats.
<LI>Red Herring -- information used to throw the discussion
off track.
<LI>Opposition ("Guilt by Association") -- to condemn an idea
because of who is for it.
<LI>Genetic -- attacking the source of the idea, rather than
the idea itself.
<LI>Bandwagon
</OL>
<LI>Fallacies of Ambiguity
<OL TYPE = 1>
<LI>Equivocation -- the use of a word in a sense different than
that understood by the reader.
<LI>Amphiboly -- some sentences admit more than one interpretation.
<LI>Accent -- some sentences have different meanings depending
on which word is stressed.
<LI>Composition -- the implication that what is true of the parts
must also be true of the whole.
<LI>Division -- the implication that what is true of the whole
must be true of its parts.
<LI>False Analogy
</OL>
<LI>Fallacies of the Misuse of Logic
<OL TYPE = 1>
<LI><I>petitio principii</I> ("Begging the Question") --
restating one of the premises as the conclusion; assuming
the truth of a proposition which needs to be proven.
<UL>
<LI><I>circulus in probando</I> ("Circular Argument")
</UL>
<LI><I>non sequitur</I> ("Does Not Follow") -- the stated conclusion
does not follow from the evidence supplied.
<LI><I>plurium interrogationum</I> ("Complex Question") -- e.g.,
"When did you stop beating your wife?"
<LI>Garbled Syllogism -- an illogical argument phrased in logical
terms.
<LI>Either-Or -- assuming a question has only two sides.
</OL>
</OL>
<A NAME = "FastWalshTransform"></A>
<P><DT><B>Fast Walsh Transform</B>
<DD>(Also Walsh-Hadamard transform.) When applied to a
<A HREF = "#BooleanFunction">Boolean function</A>, a Fast Walsh
Transform is essentially a correlation count between the given
function and each Walsh function. Since the Walsh functions are
essentially the
<A HREF = "#AffineBooleanFunction">affine Boolean functions</A>,
the FWT computes the
<A HREF = "#UnexpectedDistance">unexpected distance</A> from a given
function to each affine function. It does this in time proportional to
<I>n</I> log <I>n</I>, for functions of <I>n</I> bits, with <I>n</I>
some power of 2.
<P>If two Boolean functions are <I>not</I> correlated, we expect
them to agree half the time, which we might call the "expected
<A HREF = "#HammingDistance">distance</A>." When two Boolean
functions <I>are</I> correlated, they
will have a distance greater or less than the expected distance,
and we might call this difference the
<A HREF = "#UnexpectedDistance">unexpected distance</A> or UD.
The UD can be positive or negative, representing distance to a
particular affine function or its complement.
<P>It is easy to do a Fast Walsh Transform by hand. (Well, I say
"easy," then always struggle when I actually do it.) Let's do the
FWT of function f: (1 0 0 1 1 1 0 0): First note that f has a binary
power length, as required. Next, each pair of elements is modified
by an "in-place butterfly"; that is, the values in each pair produce
two results which replace the original pair, wherever they were
originally located. The left result will be the two values added;
the right will be the first less the second. That is,
<PRE>
(a',b') = (a+b, a-b)
</PRE>
<P>So for the values (1,0), we get (1+0, 1-0) which is just (1,1).
We start out pairing adjacent elements, then elements two apart,
then four apart, and so on, doubling the spacing until no further
pairing is possible, as shown:
<PRE>
original   1   0   0   1   1   1   0   0
           ^---^   ^---^   ^---^   ^---^
first      1   1   1  -1   2   0   0   0
           ^-------^       ^-------^
               ^-------^       ^-------^
second     2   0   0   2   2   0   2   0
           ^---------------^
               ^---------------^
                   ^---------------^
                       ^---------------^
final      4   0   2   2   0   0  -2   2
</PRE>
<P>The result is the "unexpected distance" to each
<A HREF = "#AffineBooleanFunction">affine Boolean function</A>.
The higher the absolute value, the greater the "linearity";
if we want the
<A HREF = "#Nonlinearity"><I>non</I>linearity</A>, we must subtract
the absolute value of each unexpected distance from the expected
value, which is half the number of bits in the function. Note that
the range of possible values increases by a factor of 2 (in both
positive and negative directions) in each sublayer mixing; this is
information expansion, which we often try to avoid in cryptography.
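<P>The butterfly procedure described above is easy to sketch in code.
The following Python fragment (function name mine, not from any standard
library) repeats the worked example: pairs at spacing 1, then 2, then 4
are each replaced by their sum and difference.

```python
def fwt(values):
    """In-place Fast Walsh Transform via (a, b) -> (a+b, a-b) butterflies."""
    v = list(values)
    n = len(v)                # n must be a power of 2
    span = 1
    while span < n:
        for start in range(0, n, 2 * span):
            for i in range(start, start + span):
                a, b = v[i], v[i + span]
                v[i], v[i + span] = a + b, a - b
        span *= 2
    return v

print(fwt([1, 0, 0, 1, 1, 1, 0, 0]))   # [4, 0, 2, 2, 0, 0, -2, 2]
```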
<P>Also see:
<A HREF = "http://www.io.com/~ritter/RES/WALHAD.HTM">Walsh-Hadamard
Transforms: A Literature Survey</A>, in the
<A HREF = "http://www.io.com/~ritter/#LiteratureSurveys">Literature
Surveys and Reviews</A> section of the
<A HREF = "http://www.io.com/~ritter/">Ciphers By Ritter</A> page, and the
<A HREF = "http://www.io.com/~ritter/JAVASCRP/NONLMEAS.HTM">Active
Boolean Function Nonlinearity Measurement in JavaScript</A> page of the
<A HREF = "http://www.io.com/~ritter/#JavaScript">Ciphers By Ritter /
JavaScript</A> computation pages.
<P>The FWT provides a strong mathematical basis for
<A HREF = "#BlockCipher">block cipher</A> mixing such that all input
values will have an equal chance to affect all output values.
Cryptographic mixing then occurs in butterfly operations based on
<A HREF = "#BalancedBlockMixing">balanced block mixing</A> structures
which replace the simple add / subtract butterfly in the FWT and
confine the value ranges so information expansion does not occur.
A related concept is the well-known
<A HREF = "#FFT">FFT</A>, which can use exactly the same mixing
patterns as the FWT.
<A NAME = "FCSR"></A>
<P><DT><B>FCSR</B>
<DD>Feedback with Carry Shift Register. A sequence generator
analogous to an
<A HREF = "#LFSR">LFSR</A>, but separately storing and using
a "carry" value from the computation.
<A NAME = "FeistelConstruction"></A>
<P><DT><B>Feistel Construction</B>
<DD>The Feistel construction is the widely-known method
of constructing block ciphers used in
<A HREF = "#DES">DES</A>. Horst Feistel worked for IBM in
the 60's and 70's, and was awarded a number of crypto patents,
including: 3,768,359, 3,768,360, and 4,316,055.
<P>Normally, in a Feistel construction, the input block is split
into two parts, one of which drives a transformation whose result
is exclusive-OR combined into the other block. Then the "other
block" value feeds the same transformation, whose result is
exclusive-OR combined into the first block. This constitutes 2 of
perhaps 16 "<A HREF = "#Round">rounds</A>."
<PRE>
    L           R
    |           |
    |---> F --->+      round 1
    |           |
    +<--- F <---|      round 2
    |           |
    v           v
    L'          R'
</PRE>
<P>One advantage of the Feistel construction is that the
transformation does not need to be invertible. To reverse any
particular layer, it is only necessary to apply the same
transformation again, which will undo the changes of the original
exclusive-OR.
<P>A disadvantage of the Feistel construction is that
<A HREF = "#Diffusion">diffusion</A>
depends upon the internal transformation. There is no guarantee of
<A HREF = "#OverallDiffusion">overall diffusion</A>, and the number
of rounds required is often found by experiment.
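<P>The two-round structure above can be sketched as follows. This is an
illustrative toy, not any particular cipher, and the mixing function
<TT>F</TT> is an arbitrary stand-in which, as noted, need not be
invertible:

```python
def feistel(block, round_keys, F):
    """Forward Feistel rounds: XOR F(one half) into the other half."""
    L, R = block
    for k in round_keys:
        L, R = R, L ^ F(R, k)
    return L, R

def feistel_inverse(block, round_keys, F):
    """Apply the rounds in reverse; the same F undoes each XOR."""
    L, R = block
    for k in reversed(round_keys):
        L, R = R ^ F(L, k), L
    return L, R

# F is a made-up, non-invertible 8-bit mixing function
F = lambda x, k: (x * 31 + k) & 0xFF
keys = [1, 2, 3, 4]
ct = feistel((0x12, 0x34), keys, F)
print(feistel_inverse(ct, keys, F) == (0x12, 0x34))   # True
```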
<A NAME = "FencedDES"></A>
<P><DT><B>Fenced DES</B>
<DD>A
<A HREF = "#BlockCipher">block cipher</A> with three
<A HREF = "#Layer">layers</A>, in which the outer layers consist of
<A HREF = "#Fencing">fencing</A> tables,
and the inner layer consists of
<A HREF = "#DES">DES</A> used as a
<A HREF = "#Component">component</A>. For block widths over 64 bits,
<A HREF = "#BalancedBlockMixing">Balanced Block Mixing</A> technology
assures that any bit change is propagated to each DES operation.
<P>Also see the
<A HREF = "http://www.io.com/~ritter/#FencedTech">Fenced DES</A>
section of the
<A HREF = "http://www.io.com/~ritter/">Ciphers By Ritter</A> page, and
<A HREF = "http://www.io.com/~ritter/KEYSHUF.HTM">A Keyed Shuffling
System for Block Cipher Cryptography</A>.
<A NAME = "Fencing"></A>
<P><DT><B>Fencing</B>
<DD>Fencing is a term-of-art which describes a layer of
<A HREF = "#SubstitutionTable">substitution tables</A>.
In schematic or data-flow diagrams, the row of tiny substitution
boxes stands like a picket fence between the data on each side.
<A NAME = "FencingLayer"></A>
<P><DT><B>Fencing Layer</B>
<DD>A fencing
<A HREF = "#Layer">layer</A> is a
<A HREF = "#VariableSizeBlockCipher">variable size block cipher</A>
layer composed of small (and therefore realizable)
<A HREF = "#SimpleSubstitution">substitutions</A>. Typically
the layer contains many separate
<A HREF = "#Key">keyed</A>
<A HREF = "#SubstitutionTable">substitution tables</A>. To
make the layer extensible, the substitutions can be re-used in
some pre-determined sequence, or the table to be used at each
position can be selected by some computed value.
<P><A HREF = "#Fencing">Fencing</A> layers are also used in
other types of cipher.
<A NAME = "FFT"></A>
<P><DT><B>FFT</B>
<DD>Fast Fourier Transform. A numerically advantageous
way of computing a
<A HREF = "#FourierTransform">Fourier transform</A>.
Basically a way of transforming information from
<A HREF = "#Amplitude">amplitude</A> values sampled periodically
through time, into amplitude values sampled periodically through
complex
<A HREF = "#Frequency">frequency</A>. The FFT performs this
transformation in time proportional to n log n, for some n a
power of 2.
<P>While exceedingly valuable, the FFT tends to run into practical
problems in use which can require a deep understanding of the
process. For example, the transform assumes that the waveform is
"stationary" and thus repetitive and continuous, which is rarely the
case. As another example, sampling a continuous wave can create
spurious "frequency" values related to the sampling and not the
wave itself. Also the range of possible values increases by a
factor of 2 (in both positive and negative directions) in every
sublayer mixing; this is information expansion, which we often try
to avoid in cryptography.
<P>The FFT provides a strong mathematical basis for
<A HREF = "#BlockCipher">block cipher</A> mixing such that all input
values will have an equal chance to affect all output values.
Cryptographic mixing then occurs in butterfly operations based on
<A HREF = "#BalancedBlockMixing">balanced block mixing</A> structures
which replace the simple add / subtract butterfly in the FFT and
confine the value ranges so information expansion does not occur.
A related concept is the
<A HREF = "#FastWalshTransform">fast Walsh-Hadamard transform</A>
(FWT), which can use exactly the same mixing patterns as the FFT.
<A NAME = "Field"></A>
<P><DT><B>Field</B>
<DD>In abstract algebra, a commutative
<A HREF = "#Ring">ring</A> in which all non-zero elements have a
multiplicative inverse. (This means we can divide.)
<P>In general, a field supports the four basic operations
(addition, subtraction, multiplication and division), and satisfies
the normal rules of arithmetic. An operation on any two elements
in a field yields a result which is also an element in the field.
<P>Examples of fields include rings of
<A HREF = "#Integer">integers</A>
<A HREF = "#Modulo">modulo</A> some
<A HREF = "#Prime">prime</A>.
Here are multiplication tables under mod 2, mod 3 and mod 4:
<PRE>
    0 1        0 1 2        0 1 2 3
 0  0 0     0  0 0 0     0  0 0 0 0
 1  0 1     1  0 1 2     1  0 1 2 3
            2  0 2 1     2  0 2 0 2
                         3  0 3 2 1
</PRE>
In a field, each element must have an inverse, and the product of
an element and its inverse is 1. This means that every non-zero
row and column of the multiplication table for a field must
contain a 1. Since row 2 of the mod 4 table does not contain a 1,
the set of integers mod 4 is not a field.
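<P>That "every non-zero row and column must contain a 1" test can be
checked mechanically. This small Python sketch (function name mine)
confirms that the integers mod 3 pass while the integers mod 4 fail:

```python
def has_all_inverses(n):
    # True when every nonzero residue mod n has a multiplicative inverse
    return all(any(a * b % n == 1 for b in range(1, n)) for a in range(1, n))

print(has_all_inverses(3))   # True:  the integers mod 3 form a field
print(has_all_inverses(4))   # False: row 2 of the mod 4 table has no 1
```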
<P>The <I>order</I> of a field is the number of elements in that
field. The integers mod <I>p</I> form a
<A HREF = "#FiniteField">finite field</A> of order <I>p.</I>
Similarly,
<A HREF = "#Mod2Polynomial">mod 2 polynomials</A> will form a field
with respect to an
<A HREF = "#Irreducible">irreducible</A>
<A HREF = "#Polynomial">polynomial</A>,
and will have order 2<SUP>n</SUP>, which is a very useful size.
<A NAME = "FiniteField"></A>
<P><DT><B>Finite Field</B>
<DD>A
<A HREF = "#GaloisField">Galois field</A>: A mathematical
<A HREF = "#Field">field</A> of non-infinite
<A HREF = "#Order">order.</A> As opposed to an <I>infinite field,</I>
such as the rationals, reals and complex numbers.
<P><UL>
<LI>In a finite field, every nonzero element <I>x</I> can be squared,
cubed, and so on, and at some power will eventually become 1. The
smallest (positive) power <I>n</I> at which <NOBR><I>x<SUP>n</SUP></I> = 1</NOBR>
is the <I>order</I> of element <I>x</I>. This of course makes
<I>x</I> an "<I>nth
<A HREF = "#Root">root</A> of unity,</I>" in that it satisfies the
equation <NOBR><I>x<SUP>n</SUP></I> = 1</NOBR>.
<LI>A finite field of order <I>q</I> will have one or more
<I><A HREF = "#Primitive">primitive</A></I> elements <I>a</I> whose
order is <I>q</I>-1 and whose powers cover all nonzero field elements.
<LI>For every element <I>x</I> in a finite field of order <I>q</I>,
<NOBR><I>x<SUP>q</SUP> = x.</I></NOBR>
</UL>
<A NAME = "FlipFlop"></A>
<P><DT><B>Flip-Flop</B>
<DD>A class of
<A HREF = "#Digital">digital</A>
<A HREF = "#Logic">logic</A>
<A HREF = "#Component">component</A>
which has a single
<A HREF = "#Bit">bit</A> of
<A HREF = "#State">state</A> with various control signals to
effect a state change. There are several common versions:
<UL>
<P><LI>Latch -- the output follows the input, but only while the
<A HREF = "#Clock">clock</A> input is "1"; lowering the clock
prevents the output from changing.
<P><LI>SR FF -- Set / Reset; typically created by cross-connecting
two 2-input NAND
<A HREF = "#Gate">gates</A>, in which case the inputs are
complemented: a "0" on the S input forces a stable "1" state,
which is held until a "0" on the R input forces a "0".
<P><LI>D or "delay" FF -- senses the input value at the time of a
particular <A HREF = "#Clock">clock</A> transition.
<P><LI>JK FF -- the J input is an
<A HREF = "#AND">AND</A>
enable for a clocked or synchronous transition to "1"; the K
input is an AND enable for a clocked transition to "0"; and often
there are S and R inputs to force "1" or "0" (respectively)
asynchronously.
</UL>
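<P>The cross-connected NAND version of the SR flip-flop can be simulated
in a few lines. This sketch (function names mine) simply iterates the
feedback until it settles, with active-low inputs as described above:

```python
def nand(a, b):
    return 1 - (a & b)

def sr_latch(s, r, q=1):
    """Cross-coupled NAND SR latch; S and R are active-low ("0" asserts)."""
    qbar = nand(r, q)
    for _ in range(4):          # iterate the feedback until it settles
        q = nand(s, qbar)
        qbar = nand(r, q)
    return q

print(sr_latch(0, 1))   # 1: a "0" on S forces a stable "1" state
print(sr_latch(1, 0))   # 0: a "0" on R forces a "0"
```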
<A NAME = "FourierSeries"></A>
<P><DT><B>Fourier Series</B>
<DD>An infinite series in which the terms are constants (A, B)
multiplied by sine or cosine functions of integer multiples (n) of
the variable (x). One way to write this would be:
<BIG><PRE>
f(x) = A<SUB>0</SUB> + SUM (A<SUB>n</SUB> cos nx + B<SUB>n</SUB> sin nx)
</PRE></BIG>
Alternately, over the interval [a, a+2c]:
<PRE>
f(x) = a<SUB>0</SUB> + SUM ( a<SUB>n</SUB> cos(n PI x/c) + b<SUB>n</SUB> sin(n PI x/c) )
a<SUB>n</SUB> = 1/c INTEGRAL[a,a+2c]( f(x) cos(n PI x/c) dx )
b<SUB>n</SUB> = 1/c INTEGRAL[a,a+2c]( f(x) sin(n PI x/c) dx )
</PRE>
<A NAME = "FourierTheorem"></A>
<P><DT><B>Fourier Theorem</B>
<DD>Under suitable conditions any periodic function can be
represented by a
<A HREF = "#FourierSeries">Fourier series</A>. (Various other
"orthogonal functions" are now known.)
<P>The use of sine and cosine functions is particularly interesting,
since each term represents a single
<A HREF = "#Frequency">frequency</A> oscillation. So to
the extent that we can represent an
<A HREF = "#Amplitude">amplitude</A> waveform as a series of sine
and cosine functions, we thus describe the frequency spectrum
associated with that waveform. This frequency spectrum describes
the frequencies which must be handled by a
<A HREF = "#Circuit">circuit</A> to reproduce the original waveform.
This illuminating computation is called a
<A HREF = "#FourierTransform">Fourier transform</A>.
<A NAME = "FourierTransform"></A>
<P><DT><B>Fourier Transform</B>
<DD>The Fourier transform relates
<A HREF = "#Amplitude">amplitude</A> samples at periodic discrete
times to amplitude samples at periodic discrete
<A HREF = "#Frequency">frequencies</A>. There are thus two
representations: the amplitude vs. time waveform, and the
amplitude vs. complex frequency (magnitude and phase) spectrum.
Exactly the same information is present in either representation,
and the transform supports converting either one into the other.
This computation is efficiently performed by the
<A HREF = "#FFT">FFT</A>.
<P>In a cryptographic context, one of the interesting parts of the
Fourier transform is that it represents a thorough
<A HREF = "#Mixing">mixing</A> of each input value to every output
value.
<A NAME = "Frequency"></A>
<P><DT><B>Frequency</B>
<DD>The number of repetitions or <I>cycles</I> per second.
Now measured in Hertz (Hz); previously called cycles-per-second
(cps).
<A NAME = "Function"></A>
<P><DT><B>Function</B>
<DD>A
<A HREF = "#Mapping">mapping</A>; sometimes specifically confined
to numbers.
<A NAME = "FWT"></A>
<P><DT><B>FWT</B>
<DD><A HREF = "#FastWalshTransform">Fast Walsh Transform</A>.
<A NAME = "Gain"></A>
<P><DT><HR><P><B>Gain</B>
<DD>The
<A HREF = "#Amplitude">amplitude</A> change due to
<A HREF = "#Amplifier">amplification</A>. A negative gain
is in fact a <I>loss.</I>
<A NAME = "GaloisField"></A>
<P><DT><B>Galois Field</B>
<DD><A HREF = "#FiniteField">Finite field</A>. First encountered by
the young student Evariste Galois, in France around 1830, a year or
two before he died in a duel.
<A NAME = "Gate"></A>
<P><DT><B>Gate</B>
<DD>A
<A HREF = "#Digital">digital</A>
<A HREF = "#Logic">logic</A>
<A HREF = "#Component">component</A>
which is a simple logic function, possibly with a complemented
output. Some common
<A HREF = "#Boolean">Boolean</A> logic gates include:
<UL>
<LI><A HREF = "#AND">AND</A>
<LI><A HREF = "#OR">OR</A>
<LI><A HREF = "#ExclusiveOR">Exclusive-OR</A>
<LI>NAND -- <A HREF = "#AND">AND</A> with output complement
<LI>NOR -- <A HREF = "#OR">OR</A> with output complement
<LI>Exclusive-NOR -- <A HREF = "#ExclusiveOR">Exclusive-OR</A>
with output complement
<LI>NOT -- the complement
</UL>
<A NAME = "GF2n"></A>
<P><DT><B>GF 2<SUP>n</SUP></B>
<DD>The
<A HREF = "#GaloisField">Galois field</A> or
<A HREF = "#FiniteField">finite field</A> of 2<SUP>n</SUP>
<A HREF = "#Polynomial">polynomials</A> of
degree n-1 or less.
<P>Typically we have
<A HREF = "#Mod2Polynomial">mod 2 polynomials</A> with results
reduced "modulo" an
<A HREF = "#Irreducible">irreducible</A> "generator" polynomial
<I>g</I> of degree <I>n.</I> This is analogous to creating a
<A HREF = "#Field">field</A> from the
<A HREF = "#Integer">integers</A>
<A HREF = "#Modulo">modulo</A> some
<A HREF = "#Prime">prime</A> <I>p.</I>
<P>For example, consider GF(2<SUP>4</SUP>) using the generator
polynomial x<SUP>4</SUP> + x + 1, or 10011, which is a degree-4
<A HREF = "#Irreducible">irreducible</A>. First we multiply
two elements as usual:
<PRE>
          1 0 1 1
        * 1 1 0 0
        ---------
                0
              0
      1 0 1 1
    1 0 1 1
    -------------
    1 1 1 0 1 0 0
</PRE>
Then we "reduce" the result modulo the generator polynomial:
<PRE>
                    1 1 1
            -------------
1 0 0 1 1 ) 1 1 1 0 1 0 0
            1 0 0 1 1
            ---------
              1 1 1 0 0
              1 0 0 1 1
              ---------
                1 1 1 1 0
                1 0 0 1 1
                ---------
                  1 1 0 1
                  =======
</PRE>
<P>So, if I did the arithmetic right, the result is the remainder,
1101. I refer to this as arithmetic "mod 2, mod p".
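<P>The same "mod 2, mod p" arithmetic is easy to sketch on integers used
as bit patterns. This illustrative Python (function names mine)
reproduces the worked example:

```python
def clmul(a, b):
    """Mod 2 (carry-less) polynomial multiplication of bit patterns."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    return p

def reduce_mod(p, g):
    """Reduce polynomial p modulo the generator polynomial g."""
    gdeg = g.bit_length() - 1
    while p.bit_length() - 1 >= gdeg:
        p ^= g << (p.bit_length() - 1 - gdeg)
    return p

# the worked example: 1011 * 1100, reduced mod 10011
r = reduce_mod(clmul(0b1011, 0b1100), 0b10011)
print(bin(r))   # 0b1101
```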
<P>An <A HREF = "#Irreducible">irreducible</A> is sufficient
to form a finite field. However, some special irreducibles are
also <A HREF = "#PrimitivePolynomial">primitive</A>, and these
create "maximal length" sequences in <A HREF = "#LFSR">LFSR</A>'s.
<A NAME = "GoodnessOfFit"></A>
<P><DT><B>Goodness of Fit</B>
<DD>In
<A HREF = "#Statistics">statistics</A>,
a test used to compare two
<A HREF = "#Distribution">distributions</A>. For
<A HREF = "#Nominal">nominal</A> or "binned" measurements, a
<A HREF = "#ChiSquare">chi-square</A> test is common. For
<A HREF = "#Ordinal">ordinal</A> or ordered measurements, a
<A HREF = "#KolmogorovSmirnov">Kolmogorov-Smirnov</A> test is
appropriate.
<P>Goodness-of-fit tests can <I>at best</I> tell us whether one
distribution <B>is</B> or <B>is not</B> the same as the other,
and they say even <I>that</I> only with some probability. It is
important to be very careful about experiment design, so that,
almost always, "nothing unusual found" is the goal we seek. When
we can match distributions, we are obviously able to state exactly
what the experimental distribution should be and is. But there
are <I>many</I> ways in which distributions can differ, and simply
finding a difference is <I>not</I> evidence of a specific effect.
(See <A HREF = "#NullHypothesis">null hypothesis</A>.)
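<P>For binned measurements, the chi-square statistic itself is a one-line
computation; interpreting it still requires the appropriate distribution
and degrees of freedom. A minimal sketch (function name mine):

```python
def chi_square(observed, expected):
    # sum over bins of (O - E)^2 / E
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 100 flips of a fair coin, expecting 50 heads and 50 tails
print(chi_square([48, 52], [50, 50]))   # 0.16
```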
<A NAME = "Group"></A>
<P><DT><B>Group</B>
<DD>In abstract algebra, a nonempty set <I>G</I> with one
<A HREF = "#Dyadic">dyadic</A>
(two-input, one-output) operation which we choose to call
"multiplication" and denote * as usual.
If elements (not necessarily numbers) <I>a, b</I> are in <I>G,</I>
then <I>ab</I> (or <I>a*b</I>) is also in <I>G.</I> The following
properties hold:
<OL>
<LI><B>Multiplication is associative:</B> (ab)c = a(bc)
<LI><B>There is a multiplicative identity:</B> for e in G,
ea = ae = a
<LI><B>There is a multiplicative inverse:</B> for a in G,
there is an a<SUP>-1</SUP> in G such that
<NOBR>a<SUP>-1</SUP>a = e = aa<SUP>-1</SUP></NOBR>
</OL>
<P>A group is basically a
<A HREF = "#Mapping">mapping</A> from two elements in the
group, through the group operation <I>m,</I> into the same group:
<BIG><PRE>
<I>m</I>:G x G -> G
</PRE></BIG>
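<P>For a small example the group axioms can be checked exhaustively.
This sketch verifies closure, associativity, identity and inverses for
the nonzero integers mod 5 under multiplication:

```python
G = [1, 2, 3, 4]                       # nonzero residues mod 5
op = lambda a, b: a * b % 5

closed   = all(op(a, b) in G for a in G for b in G)
assoc    = all(op(op(a, b), c) == op(a, op(b, c))
               for a in G for b in G for c in G)
identity = all(op(1, a) == a == op(a, 1) for a in G)
inverses = all(any(op(a, b) == 1 for b in G) for a in G)
print(closed, assoc, identity, inverses)   # True True True True
```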
<A NAME = "HammingDistance"></A>
<P><DT><HR><P><B>Hamming Distance</B>
<DD>A measure of the difference or "distance" between two binary
sequences of equal length; in particular, the number of bits which
differ between the sequences. This is the
<A HREF = "#Weight">weight</A> or the number of 1-bits in the
<A HREF = "#ExclusiveOR">exclusive-OR</A> of the two sequences.
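<P>In code the definition is a single line: exclusive-OR the two values,
then count the 1-bits. A Python sketch (function name mine):

```python
def hamming_distance(a, b):
    # weight (count of 1-bits) of the exclusive-OR of the two values
    return bin(a ^ b).count("1")

print(hamming_distance(0b1011, 0b1101))   # 2
```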
<A NAME = "Hardware"></A>
<P><DT><B>Hardware</B>
<DD>The physical realization of computation. Typically, the
<A HREF = "#Electronic">electronic</A>
<A HREF = "#Digital">digital</A>
<A HREF = "#Logic">logic</A>, power supply, and various
electro-mechanical
<A HREF = "#Component">components</A> such as disk drives,
<A HREF = "#Switch">switches</A>, and possibly
<A HREF = "#Relay">relays</A> which make up a
<A HREF = "#Computer">computer</A> or other digital
<A HREF = "#System">system</A>. As opposed to
<A HREF = "#Software">software</A>. See
<A HREF = "#SystemDesign">system design</A> and
<A HREF = "#Debug">debug</A>.
<A NAME = "Hash"></A>
<P><DT><B>Hash</B>
<DD>A classic
<A HREF = "#Computer">computer</A> operation which forms a
fixed-size result
from an arbitrary amount of data. Ideally, even the smallest
change to the input data will change about half of the bits in
the result. Often used for table look-up, so that very similar
language terms or phrases will be well-distributed throughout
the table. Also often used for error-detection, and, in the form of a
<A HREF = "#MessageDigest">message digest</A>, for
<A HREF = "#Authentication">authentication</A>.
<P>A hash of data will produce a particular hash value, which then
can be included in the message before it is sent (or stored). When
the data are received (or read) and the hash value computed, this
should match the included hash value. So if the hash is different,
something has changed, and the usual solution is to request the data
be sent again. But the hash value is typically much smaller than
the data, so there <I>must</I> be "many" different data sets which
will produce that same value. This means that "error detection"
inherently cannot detect all possible errors, and this is quite
independent of any "linearity" in the hash computation.
<P>An excellent example of a hash function is a
<A HREF = "#CRC">CRC</A> operation. CRC is a
<A HREF = "#Linear">linear</A> function without cryptographic
<A HREF = "#Strength">strength</A>, but does have a strong
mathematical basis which is lacking in <I>ad hoc</I> methods.
Strength is not needed when
<A HREF = "#Key">keys</A> are processed into the
<A HREF = "#State">state</A> used in a
<A HREF = "#RandomNumberGenerator">random number generator</A>,
because if either the key or the state becomes known, the keyed
<A HREF = "#Cipher">cipher</A> has been broken.
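<P>Python's standard <TT>zlib</TT> module provides just such a CRC; the
snippet below shows the fixed-size result and that a one-letter change
to the data changes it (the example strings are mine):

```python
import zlib

data    = b"The quick brown fox jumps over the lazy dog"
altered = b"The quick brown fox jumps over the lazy cog"

print(hex(zlib.crc32(data)))                     # 0x414fa339
print(zlib.crc32(data) != zlib.crc32(altered))   # True
```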
<P>In contrast, a <I>cryptographic</I> hash function must be
"strong" in the sense that it must be "computationally
infeasible" to find two input values which produce the same hash
result. In general, this means that the hash result should
be 128 bits or more in size.
<P>Sometimes a cryptographic hash function is described as
being "collision free," which is a misnomer. A collision occurs
when two different texts produce exactly the same hash result.
Given enough texts, collisions will of course occur, precisely
because any fixed-size result has only so many possible code
values. The intent is that collisions be hard to find and
particular hash values impossible to create at will.
<A NAME = "Hexadecimal"></A>
<P><DT><B>Hexadecimal (Hex)</B>
<DD>Base 16. The numerical representation in which each digit has an
<A HREF = "#Alphabet">alphabet</A> of sixteen symbols, generally
0 through 9, plus A through F, or "a" through "f".
<P>Each hex value represents exactly four
<A HREF = "#Bit">bits</A>, which can be particularly convenient.
Also see:
<A HREF = "#Binary">binary</A>,
<A HREF = "#Octal">octal</A>, and
<A HREF = "#Decimal">decimal</A>.
<A NAME = "Homophonic"></A>
<P><DT><B>Homophonic</B>
<DD>Greek for "the same sound." The concept of having different
letter sequences which are pronounced alike. In
<A HREF = "#Cryptography">cryptography</A>, a
<A HREF = "#Cipher">cipher</A> which translates a single
<A HREF = "#Plaintext">plaintext</A> symbol into any one of multiple
<A HREF = "#Ciphertext">ciphertext</A> symbols which all have the
same meaning. Also see
<A HREF = "#Polyphonic">polyphonic</A>,
<A HREF = "#Polygraphic">polygraphic</A> and
<A HREF = "#Monographic">monographic</A>.
<A NAME = "HomophonicSubstitution"></A>
<P><DT><B>Homophonic Substitution</B>
<DD>A type of
<A HREF = "#Substitution">substitution</A> in which an original
symbol is replaced by any one of multiple unique symbols.
Intended to combat the property of
<A HREF = "#SimpleSubstitution">simple substitution</A> in which
the most-frequent symbols in the
<A HREF = "#Plaintext">plaintext</A> always produce the
most-frequent symbols in the
<A HREF = "#Ciphertext">ciphertext</A>.
<P>A form of
<A HREF = "#Homophonic">homophonic</A> substitution is available
in a large
<A HREF = "#BlockCipher">block cipher</A>, where a homophonic
selection field is enciphered along with the plaintext.
Any of the possible values for that field naturally will produce
a unique ciphertext. After deciphering any of those
ciphertexts, the homophonic selection field could be deleted,
and the exact same plaintext recovered. Note that the ability
to produce a multitude of different encipherings for exactly
the same data is related to the concept of a
<A HREF = "#Key">key</A>.
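<P>A toy sketch of the idea (the table and symbols are invented for
illustration): each plaintext letter owns several ciphertext symbols,
any of which deciphers to the same letter:

```python
import random

table = {"E": ["12", "47", "88"], "T": ["05", "63"], "A": ["29", "71"]}
decode = {sym: letter for letter, syms in table.items() for sym in syms}

def encipher(text, rng):
    # each occurrence independently picks any of the letter's homophones
    return [rng.choice(table[ch]) for ch in text]

rng = random.Random(1)
ct = encipher("TEA", rng)
print("".join(decode[sym] for sym in ct))   # TEA
```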
<A NAME = "IDEA"></A>
<P><DT><HR><P><B>IDEA</B>
<DD>The
<A HREF = "#SecretKeyCipher">secret key</A>
<A HREF = "#BlockCipher">block cipher</A> used in
<A HREF = "#PGP">PGP</A>. Designed by James Massey and Xuejia Lai
in several installments, called PES, IPES and IDEA. It is
<A HREF = "#Round">round</A>-based, with a 64-bit
<A HREF = "#Block">block</A> size, a 128-bit
<A HREF = "#Key">key</A>, and no internal
<A HREF = "#SubstitutionTable">tables</A>.
<P>The disturbing aspect of the IDEA design is the extensive use
of <I>almost</I>
<A HREF = "#Linear">linear</A> operations, and no nonlinear tables
at all. While technically <I>non</I>linear, the internal operations
seem like they might well be linear <I>enough</I> to be attacked.
<A NAME = "IdealSecrecy"></A>
<P><DT><B>Ideal Secrecy</B>
<DD>The
<A HREF = "#Strength">strength</A> delivered by even a simple
<A HREF = "#Cipher">cipher</A> when each and every
<A HREF = "#Plaintext">plaintext</A> is equally probable and
independent of every other plaintext.
<P>There are various examples:
<UL>
<LI>The use of
<A HREF = "#CBC">CBC mode</A> in
<A HREF = "#DES">DES</A>: By making every plaintext block
equally probable, DES is greatly strengthened against
<A HREF = "#CodebookAttack">codebook attack</A>.
<LI>The transmission of
<A HREF = "#Random">random</A>
<A HREF = "#MessageKey">message key</A> values: To the
extent that every value is equally probable, even a very
simple cipher is sufficient to protect those values.
<LI>The use of a keyed
<A HREF = "#SimpleSubstitution">simple substitution</A>
<I>of the ciphertext</I> to add strength, as used in the
Penknife
<A HREF = "#StreamCipher">stream cipher</A> design.
<LI>The use of data compression to reduce the redundancy in a
message before ciphering: This of course can only
<I>reduce</I> language redundancy. (Also, many compression
techniques send pre-defined tables before the data and so are
not suitable in this application.)
</UL>
<P>Also see:
<A HREF = "#PerfectSecrecy">perfect secrecy</A>.
From Claude Shannon.
<A NAME = "i.i.d."></A>
<P><DT><B>i.i.d.</B>
<DD>In
<A HREF = "#Statistics">statistics</A>: Independent,
Identically Distributed. Generally related to the
<A HREF = "#Random">random</A>
<A HREF = "#Sample">sampling</A> of a single
<A HREF = "#Distribution">distribution</A>.
<A NAME = "InductiveReasoning"></A>
<P><DT><B>Inductive Reasoning</B>
<DD>In the study of
<A HREF = "#Logic">logic</A>, reasoning from the observation of
some particular cases to produce a general statement. While often
incorrect, inductive reasoning does provide a way to go beyond
known truth to new statements which may then be tested. And
certain types of inductive reasoning can be assigned a correctness
probability using
<A HREF = "#Statistics">statistical</A> techniques.
Also see:
<A HREF = "#DeductiveReasoning">deductive reasoning</A> and
<A HREF = "#Fallacy">fallacy</A>.
<A NAME = "Inductor"></A>
<P><DT><B>Inductor</B>
<DD>A basic
<A HREF = "#Electronic">electronic</A>
<A HREF = "#Component">component</A>
which acts as a reservoir for electrical energy in the form of
<A HREF = "#Current">current</A>.
An inductor thus acts to "even out" the current flowing through it,
and to "emphasize" current changes across the terminals.
An inductor conducts
<A HREF = "#DC">DC</A> and opposes
<A HREF = "#AC">AC</A> in proportion to
<A HREF = "#Frequency">frequency</A>.
Inductance is measured in Henrys: A
<A HREF = "#Voltage">voltage</A> of 1 Volt across an inductance
of 1 Henry produces a current change of 1 Ampere per Second
through the inductor.
<P>Typically a coil of multiple turns of
<A HREF = "#Conductor">conductor</A>
wound on a magnetic or ferrous core.
<A HREF = "#Current">Current</A> in the conductor creates a
<A HREF = "#MagneticField">magnetic field</A>, thus "storing" energy.
When power is removed, the magnetic field collapses to maintain the
current flow; this can produce high voltages, as in automobile spark
coils.
<P>Also see
<A HREF = "#Capacitor">capacitor</A> and
<A HREF = "#Resistor">resistor</A>.
<A NAME = "Injective"></A>
<P><DT><B>Injective</B>
<DD><A HREF = "#OneToOne">One-to-one</A>. A
<A HREF = "#Mapping">mapping</A> f: <I>X -> Y</I> where no two
values <I>x</I> in <I>X</I> produce the same result
<I>f(x)</I> in <I>Y.</I>
A one-to-one mapping is invertible on its range <I>f(X),</I> but
unless the mapping is also onto <I>Y,</I> there will not be a
full inverse mapping g: <I>Y -> X</I>.
<A NAME = "Insulator"></A>
<P><DT><B>Insulator</B>
<DD>A material in which electron flow is difficult or impossible.
Classically air or vacuum, or wood, paper, glass, ceramic, plastic,
etc. As opposed to a
<A HREF = "#Conductor">conductor</A>.
<A NAME = "Integer"></A>
<P><DT><B>Integer</B>
<DD>An element in the set consisting of
<I>counting</I> numbers: 1, 2, 3, ...,
their negatives: -1, -2, -3, ...,
and zero.
<A NAME = "IntermediateBlock"></A>
<P><DT><B>Intermediate Block</B>
<DD>In the context of a
<A HREF = "#Layer">layered</A>
<A HREF = "#BlockCipher">block cipher</A>, the data values
produced by one layer then used by the next.
<P>In some realizations, an intermediate
<A HREF = "#Block">block</A> might be wired connections between layer
<A HREF = "#Hardware">hardware</A>. In the context of a general
purpose
<A HREF = "#Computer">computer</A>, an intermediate block might
represent the movement of data between operations, or perhaps
transient storage in the original block.
<A NAME = "Interval"></A>
<P><DT><B>Interval</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, measurements in which
the numerical value has meaning. Also see:
<A HREF = "#Nominal">nominal</A>, and
<A HREF = "#Ordinal">ordinal</A>.
<A NAME = "Into"></A>
<P><DT><B>Into</B>
<DD>A <A HREF = "#Mapping">mapping</A> f: <I>X -> Y</I> which only
partially covers Y.
An inverse mapping g: <I>Y -> X</I> may not exist if, for example,
multiple elements <I>x</I> in <I>X</I> produce the same
<I>f(x)</I> in <I>Y.</I>
<PRE>
   +----------+        +----------+
   |          |  INTO  |  Y       |
   |    X     |        |  +----+  |
   |          |   f    |  |f(X)|  |
   |          |  ---&gt;  |  +----+  |
   +----------+        +----------+
</PRE>
<A NAME = "Inverse"></A>
<P><DT><B>Inverse</B>
<DD>A <A HREF = "#Mapping">mapping</A> or function <I>g(y)</I> or
<I>f <SUP>-1</SUP>(y),</I> related to some function <I>f(x)</I>
such that for each <I>x</I> in <I>X</I>:
<BIG>
<PRE> <I>g(f(x)) = x</I> = <I>f<SUP>-1</SUP>(f(x)).</I>
</PRE>
</BIG>
Only functions which are
<A HREF = "#OneToOne">one-to-one</A> can have an inverse.
<A NAME = "Invertible"></A>
<P><DT><B>Invertible</B>
<DD>A <A HREF = "#Mapping">mapping</A> or function which has an
<A HREF = "#Inverse">inverse</A>. A transformation which can be
reversed.
<A NAME = "Involution"></A>
<P><DT><B>Involution</B>
<DD>A type of
<A HREF = "#Mapping">mapping</A> which is a self-inverse.
<P>A cipher which takes
<A HREF = "#Plaintext">plaintext</A> to
<A HREF = "#Ciphertext">ciphertext,</A> and
ciphertext back to plaintext, using the exact same operation.
<A NAME = "Irreducible"></A>
<P><DT><B>Irreducible</B>
<DD>A <A HREF = "#Polynomial">polynomial</A> only evenly divisible
by itself and 1. The polynomial analogy to
<A HREF = "#Integer">integer</A>
<A HREF = "#Prime">primes</A>. Often used to generate a
<I>residue class
<A HREF = "#Field">field</A></I> for polynomial operations.
<P>A polynomial form of the ever-popular
"<A HREF = "#SieveOfEratosthenes">Sieve of Eratosthenes</A>"
can be used to build a table of irreducibles through degree 16.
That table can then be used to check any potential irreducible
through degree 32. While slow, this can be a simple, clear
validation of other techniques.
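<P>Rather than the sieve itself, here is a minimal trial-division
test in the same spirit (an illustrative sketch, not from the original
text), with polynomials over GF(2) represented as integer bit masks:

```python
def gf2_mod(a, b):
    """Remainder of polynomial a divided by b, coefficients mod 2;
    polynomials are integer bit masks (x^2 + x + 1 is 0b111)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def is_irreducible(p):
    """Trial division by every polynomial of degree 1..deg(p)//2."""
    deg = p.bit_length() - 1
    if deg < 1:
        return False
    for d in range(1, deg // 2 + 1):
        for q in range(1 << d, 1 << (d + 1)):   # all polys of degree d
            if gf2_mod(p, q) == 0:
                return False
    return True

# x^2 + x + 1 is irreducible; x^2 + 1 = (x + 1)^2 is not.
assert is_irreducible(0b111)
assert not is_irreducible(0b101)
```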
<P>Also see
<A HREF = "#PrimitivePolynomial">primitive polynomial</A>.
<A NAME = "IV"></A>
<P><DT><B>IV</B>
<DD>"Initial value," "initializing value" or "initialization vector."
An external value needed to start off
<A HREF = "#Cipher">cipher</A> operations. Most often associated
with
<A HREF = "#CBC">CBC</A> mode.
<P>An IV often can be seen as a design-specific form of
<A HREF = "#MessageKey">message key</A>. Sometimes, iterative
ciphering under different IV values can provide sufficient keying
to perform the message key function.
<P>Generally, an IV must accompany the
<A HREF = "#Ciphertext">ciphertext</A>, and so always
expands the ciphertext by the size of the IV.
<A NAME = "Jitterizer"></A>
<P><DT><HR><P><B>Jitterizer</B>
<DD>A particular cryptographic mechanism intended to complicate the
sequence produced by a linear
<A HREF = "#RandomNumberGenerator">random number generator</A>
by deleting elements from the sequence pseudo-randomly.
<P>The
name is taken from the use of an oscilloscope on digital circuits,
where a signal which is not "in sync" is said to "jitter."
Mechanisms designed to restore synchronization are called
"synchronizers," so mechanisms designed to cause jitter
can legitimately be called "jitterizers."
<A NAME = "KB"></A>
<P><DT><HR><P><B>KB</B>
<DD>Kilobyte. 2<SUP>10</SUP> or 1024
<A HREF = "#Byte">bytes</A>.
<A NAME = "Kb"></A>
<P><DT><B>Kb</B>
<DD>Kilobit. 2<SUP>10</SUP> or 1024
<A HREF = "#Bit">bits</A>.
<A NAME = "KerckhoffsRequirements"></A>
<P><DT><B>Kerckhoffs' Requirements</B>
<DD>General cryptosystem requirements formulated in 1883
(from the Handbook of Applied Cryptography):
<P><OL>
<A NAME = "Kerckhoff1"></A>
<B><LI>The system should be, if not theoretically unbreakable,
unbreakable in practice.</B> (Of course there <I>are</I> no
realized systems which are "theoretically unbreakable," but
there is also little point in using a known
<A HREF = "#Break">breakable</A>
<A HREF = "#Cipher">cipher</A>.)
<A NAME = "Kerckhoff2"></A>
<B><LI>Compromise of the system details should not inconvenience
the correspondents.</B> (Nowadays we generally <I>assume</I>
that the
<A HREF = "#Opponent">Opponent</A> will have full details of the
cipher, since, for a cipher to be widely used, it must be present
at many locations and is therefore likely to be exposed.
We also assume that the Opponent will have some amount of
<A HREF = "#KnownPlaintextAttack">known-plaintext</A> to work
with.)
<A NAME = "Kerckhoff3"></A>
<B><LI>The
<A HREF = "#Key">key</A> should be rememberable without notes and
easily changed.</B> (This is still an issue.
<A HREF = "#Hash">Hashing</A> allows us to use long language
phrases, but the best approach may someday be to have both a
<A HREF = "#Hardware">hardware</A> key card <I>and</I> a key
phrase.)
<A NAME = "Kerckhoff4"></A>
<B><LI>The cryptogram should be transmissible by telegraph.</B>
(This is not very important nowadays, since even
<A HREF = "#Binary">binary</A>
<A HREF = "#Ciphertext">ciphertext</A> can be converted into
<A HREF = "#ASCII">ASCII</A> for transmission
if necessary.)
<A NAME = "Kerckhoff5"></A>
<B><LI>The
<A HREF = "#Encryption">encryption</A> apparatus should be
portable and operable by a single person.</B>
(<A HREF = "#Software">Software</A> encryption approaches this
ideal.)
<A NAME = "Kerckhoff6"></A>
<B><LI>The system should be easy, requiring neither the knowledge
of a long list of rules nor mental strain.</B> (Software
encryption has the <I>potential</I> to approach this, but
often fails to do so. We might think of the need to certify
<A HREF = "#PublicKeyCipher">public keys</A>, which is still
often left up to the user, and thus often does not occur.)
</OL>
<A NAME = "Key"></A>
<P><DT><B>Key</B>
<DD>The general concept of protecting things with a "lock," thus
making those things available only if one has the correct "key."
In a
<A HREF = "#Cipher">cipher</A>, the ability to select a particular
transformation between a
<A HREF = "#Plaintext">plaintext</A> message and a corresponding
<A HREF = "#Ciphertext">ciphertext</A>.
By using a particular key, we can create any one of many different
ciphertexts for the exact same message. And if we know the correct
key, we can transform the ciphertext back into the original message.
By supporting a vast number of different key possibilities (a large
<A HREF = "#Keyspace">keyspace</A>), we hope
to make it impossible for someone to decipher the message by trying
every key in a
<A HREF = "#BruteForceAttack">brute force attack</A>.
<P>In
<A HREF = "#Cryptography">cryptography</A> we have various kinds
of keys, including a User
Key (the key which a user actually remembers), which may be the
same as an Alias Key (the key for an alias file which relates
correspondent names with their individual keys). We may also
have an Individual Key (the key actually used for a particular
correspondent); a
<A HREF = "#MessageKey">Message Key</A> (normally a random value
which differs for each and every message); a
<A HREF = "#RunningKey">Running Key</A> (the
confusion sequence in a
<A HREF = "#StreamCipher">stream cipher</A>, normally produced by a
<A HREF = "#RandomNumberGenerator">random number generator</A>);
and perhaps other forms of key as well.
<P>In general, the value of a cryptographic key is used to
initialize the
<A HREF = "#State">state</A> of a
<A HREF = "#CryptographicMechanism">cryptographic mechanism</A>.
<P>Ideally, a key will be an equiprobable selection among a
huge number of possibilities. This is the fundamental strength of
cryptography, the "needle in a haystack" of false possibilities.
But if a key is in some way <I>not</I> a random selection, but is
instead <I>biased,</I> the most-likely keys can be examined first,
thus reducing the complexity of the search and the effective
<A HREF = "#Keyspace">keyspace</A>.
<P>In most cases, a key will exhibit
<A HREF = "#Diffusion">diffusion</A> across the message; that is,
changing even one bit of a key should change every bit in the message
with probability 0.5. A key with lesser diffusion may succumb to
some sort of
<A HREF = "#DivideAndConquer">divide and conquer</A> attack.
<A NAME = "KeyDistributionProblem"></A>
<P><DT><B>Key Distribution Problem</B>
<DD>The problem of distributing
<A HREF = "#Key">keys</A> to both ends of a
communication path, especially in the case of
<A HREF = "#SecretKeyCipher">secret key ciphers</A>, since
secret keys must be transported and held in absolute secrecy.
Also the problem of distributing vast numbers of keys, if each
user is given a separate key.
<P>Although this problem is supposedly "solved" by the advent
of the
<A HREF = "#PublicKeyCipher">public key cipher</A>, in fact, the
necessary public key validation is almost as difficult as the
original problem. Although public keys can be <I>exposed,</I>
they must represent who they claim to represent, or a "spoofer" or
<A HREF = "#ManInTheMiddleAttack">man-in-the-middle</A> can
operate undetected.
<P>Nor does it make sense to give each individual a separate
secret key, when a related group of people would have access
to the same files anyway. Typically, a particular group has the
same secret key, which will of course be changed when any member
leaves. Typically, each individual would have a secret key for
each group with whom he or she associates.
<A NAME = "Keyspace"></A>
<P><DT><B>Keyspace</B>
<DD>The number of distinct
<A HREF = "#Key">key</A>-selected transformations supported by a
particular
<A HREF = "#Cipher">cipher</A>.
Normally described in terms of
<A HREF = "#Bit">bits</A>, as in the number of bits needed to count
every distinct key. This is also the amount of
<A HREF = "#State">state</A> required to support a state value for
each key. The keyspace in bits is the log<SUB>2</SUB> (the base-2
logarithm) of the number of different keys, provided that all keys
are equally probable.
<P><A HREF = "#Cryptography">Cryptography</A> is based on the idea
that if we have a huge number of keys, and select one at
<A HREF = "#Random">random</A>, then
The <A HREF = "#Opponent">Opponents</A> generally must
search about half of the possible keys to find the correct one;
this is a
<A HREF = "#BruteForceAttack">brute force attack</A>.
<P>Although brute force is not the only possible
<A HREF = "#Attack">attack</A>, it is the
one attack which will always exist. Therefore, the ability to
resist a brute force attack is normally the "design strength" of a
cipher. All other attacks should be made even more expensive.
To make a brute force attack expensive, a cipher simply needs a
keyspace large enough to resist such an attack. Of course, a
brute force attack may use new computational technologies such as
DNA or "molecular computation." Currently, 120 bits is large
enough to prevent even unimaginably large uses of such new
technology.
<P>It is probably just as easy to build efficient ciphers which use
huge keys as it is to build ciphers which use small keys, and the
cost of storing huge keys is probably trivial. Thus, large keys
may be useful when this leads to a better cipher design, perhaps
with less key processing. Such keys, however, cannot be considered
better at resisting a brute force attack than a 120-bit key, since
120 bits is already sufficient.
<A NAME = "KeyedSubstitution"></A>
<P><DT><B>Keyed Substitution</B>
<DD>Two
<A HREF = "#SubstitutionTable">substitution tables</A> of the same
size with the same values can differ only in the ordering or
<A HREF = "#Permutation">permutation</A> of the values in the tables.
A huge
<A HREF = "#Key">keying</A> potential exists: The typical "n-bit-wide"
substitution table has 2<SUP>n</SUP> elements, and (2<SUP>n</SUP>)!
("two to the nth factorial") different permutations or key
possibilities. A single 8-bit substitution table has a
<A HREF = "#Keyspace">keyspace</A> of 1684 bits.
<P>A substitution table is keyed by creating a <I>particular</I>
ordering from each different key. This can be accomplished by
<A HREF = "#Shuffle">shuffling</A> the table under the control of a
<A HREF = "#RandomNumberGenerator">random number generator</A>
which is initialized from the key.
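<P>A sketch of this keying process (illustrative only: Python's
random.Random stands in for the cryptographic random number
generator, and would be far too weak for real use):

```python
import math
import random

# The keyspace of a single 8-bit table: log2(256!) is about 1684 bits.
print(round(math.log2(math.factorial(256))))

def keyed_table(key):
    """Shuffle the identity table under a key-seeded generator.

    random.Random is only a stand-in: a real design would drive the
    shuffle from a cryptographic random number generator.
    """
    table = list(range(256))
    random.Random(key).shuffle(table)      # Fisher-Yates shuffle
    return table

table = keyed_table("my key phrase")
assert sorted(table) == list(range(256))   # still a permutation
```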
<A NAME = "KnownPlaintextAttack"></A>
<P><DT><B>Known Plaintext Attack</B>
<DD>A type of
<A HREF = "#Attack">attack</A> in which the cryptanalyst has
some quantity of related
<A HREF = "#Plaintext">plaintext</A> and
<A HREF = "#Ciphertext">ciphertext</A>. This allows the
ciphering transformation to be examined directly.
<P>A known plaintext attack is especially dangerous to the usual
<A HREF = "#StreamCipher">stream cipher</A> which has an
<A HREF = "#AdditiveCombiner">additive combiner</A>, because the
known plaintext can be "subtracted" from the ciphertext, thus
completely exposing the
<A HREF = "#ConfusionSequence">confusion sequence</A>. This is
the sequence produced by the cryptographic
<A HREF = "#RandomNumberGenerator">random number generator</A>,
and can be used to attack that generator. This sort of attack
can generally be prevented by using a
<A HREF = "#DynamicSubstitutionCombiner">Dynamic Substitution
Combiner</A> instead of the usual additive combiner.
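<P>A tiny sketch of the "subtraction" (with a made-up keystream,
purely for illustration):

```python
# A made-up keystream standing in for a cryptographic RNG's output.
keystream = bytes([0x3A, 0x91, 0x5C, 0xE7, 0x08])
plaintext = b"HELLO"

# The usual additive (XOR) combiner.
ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))

# With known plaintext, XOR "subtracts" it back out, completely
# exposing the confusion sequence for attack on the generator.
recovered = bytes(c ^ p for c, p in zip(ciphertext, plaintext))
assert recovered == keystream
```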
<P>It is surprisingly reasonable that
<A HREF = "#Opponent">The Opponent</A> might well
have some known plaintext (and related ciphertext): This might
be the return address on a letter, a known report, or even some
suspected words. Sometimes the cryptosystem will carry
unauthorized messages like birthday greetings which are then
exposed, due to their apparently innocuous content.
<A NAME = "KolmogorovSmirnov"></A>
<P><DT><B>Kolmogorov-Smirnov</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, a
<A HREF = "#GoodnessOfFit">goodness of fit</A> test used to
compare two
<A HREF = "#Distribution">distributions</A> of
<A HREF = "#Ordinal">ordinal</A> data, where measurements may
be re-arranged and placed in order. Also see
<A HREF = "#ChiSquare">chi-square</A>.
<P><UL>
<LI><I>n</I> independent samples are collected and arranged in
numerical order in array <I>X</I> as
<I>x</I>[0]..<I>x</I>[<I>n</I>-1].
<LI><I>S</I>(<I>x</I>[<I>j</I>]) is the fraction of the <I>n</I>
observations which are less than or equal to <I>x</I>[<I>j</I>];
in the ordered array this is just ((<I>j</I>+1)/<I>n</I>).
<LI><I>F</I>(<I>x</I>) is the reference cumulative distribution,
the probability that a random value will be less than or equal to
<I>x</I>. Here we want <I>F</I>(<I>x</I>[<I>j</I>]), the fraction
of the distribution to the left of <I>x</I>[<I>j</I>] which is a
value from the array.
</UL>
<P>The "one-sided" statistics are:
<PRE>
   K<SUP>+</SUP> = SQRT(n) * MAX( S(x[j]) - F(x[j]) )
      = SQRT(n) * MAX( ((j+1)/n) - F(x[j]) )
   K<SUP>-</SUP> = SQRT(n) * MAX( F(x[j]) - S(x[j]) )
      = SQRT(n) * MAX( F(x[j]) - (j/n) )
</PRE>
<P>And the "two-sided" KS statistic is:
<PRE>
   K = SQRT(n) * MAX( ABS( S(x[j]) - F(x[j]) ) )
     = MAX( K<SUP>+</SUP>, K<SUP>-</SUP> )
</PRE>
<P>It appears that the "one-sided" KS distribution is far easier
to compute precisely, and may be preferred on that basis.
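<P>The statistics above translate directly into code; this sketch
assumes a uniform reference distribution F(x) = x on [0,1]:

```python
from math import sqrt

def ks_statistics(samples, F):
    """One- and two-sided KS statistics for n samples against the
    reference cumulative distribution F."""
    x = sorted(samples)
    n = len(x)
    k_plus  = sqrt(n) * max((j + 1) / n - F(x[j]) for j in range(n))
    k_minus = sqrt(n) * max(F(x[j]) - j / n for j in range(n))
    return k_plus, k_minus, max(k_plus, k_minus)

# Five samples against a uniform reference on [0,1]: F(x) = x.
kp, km, k = ks_statistics([0.1, 0.26, 0.52, 0.77, 0.9], lambda x: x)
print(kp, km, k)
```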
<P>See the
<A HREF = "http://www.io.com/~ritter/JAVASCRP/NORMCHIK.HTM#KolSmir">Kolmogorov-Smirnov</A>
section of the
<A HREF = "http://www.io.com/~ritter/#JavaScript">Ciphers By Ritter /
JavaScript</A> computation pages.
<A NAME = "Latency"></A>
<P><DT><HR><P><B>Latency</B>
<DD>A form of delay. Typically a
<A HREF = "#Hardware">hardware</A> term, latency often
refers to the time needed to perform an operation. In the past,
operation delay has largely been dominated by the time taken for
<A HREF = "#Gate">gate</A>
<A HREF = "#Switch">switching</A>
<A HREF = "#Transistor">transistors</A> to turn on and off.
Currently, operation delay is more often dominated by the time it
takes to transport the
<A HREF = "#Electrical">electrical</A> signals to and from gates
on long, thin
<A HREF = "#Conductor">conductors</A>.
<P>The effect of latency on throughput can often be reduced by
<I>pipelining</I> or partitioning the main operation into many
small sub-operations, and running each of those <I>in parallel,</I>
or at the same time. As each operation finishes, that result is
latched and saved temporarily, pending the availability of the
next sub-operation hardware. The result is throughput limited
only by the longest sub-operation instead of the overall operation.
<A NAME = "LatinSquare"></A>
<P><DT><B>Latin Square</B>
<DD>A Latin square of order <I>n</I> is an <I>n</I> by <I>n</I> array
containing symbols from some alphabet of size <I>n</I>, arranged such
that each symbol appears exactly once in each row and exactly once
in each column. Also see
<A HREF = "#LatinSquareCombiner">Latin square combiner</A> and
<A HREF = "#OrthogonalLatinSquares">orthogonal Latin squares</A>.
<PRE>
2 0 1 3
1 3 0 2
0 2 3 1
3 1 2 0
</PRE>
<P>Also see:
<A HREF = "http://www.io.com/~ritter/RES/LATSQ.HTM">Latin Squares:
A Literature Survey</A>, in the
<A HREF = "http://www.io.com/~ritter/#LiteratureSurveys">Literature
Surveys and Reviews</A> section of the
<A HREF = "http://www.io.com/~ritter/">Ciphers By Ritter</A> page.
<A NAME = "LatSqComb"></A>
<A NAME = "LatinSquareCombiner"></A>
<P><DT><B>Latin Square Combiner</B>
<DD>A
<A HREF = "#Cryptography">cryptographic</A>
<A HREF = "#Combiner">combining</A>
<A HREF = "#Mechanism">mechanism</A> in which one input selects
a column and the other input selects a row in an existing
<A HREF = "#LatinSquare">Latin square</A>; the value of the
selected element is the combiner result.
<P>A Latin square combiner is inherently
<A HREF = "#Balance">balanced</A>, because
for any particular value of one input, the other input can produce
any possible output value. A Latin square can be treated as an
array of
<A HREF = "#SubstitutionTable">substitution tables</A>, each of
which are invertible, and so can be reversed for use in a suitable
<A HREF = "#Extractor">extractor</A>. As usual
with cryptographic combiners, if we know the output and a
specific one of the inputs, we can extract the value of the
other input.
<P>For example, a tiny Latin square combiner might combine two
2-bit values each having the range zero to three (0..3). That
Latin square would contain four different symbols (here 0, 1, 2,
and 3), and thus be a square of order 4:
<PRE>
2 0 1 3
1 3 0 2
0 2 3 1
3 1 2 0
</PRE>
<P>With this square we can combine the values 0 and 2 by selecting
the top row (row 0) and the third column (column 2) and
returning the value 1.
<P>When extracting, we will know a specific one (but only one)
of the two input values, and the result value. Suppose we know
that row 0 was selected during combining, and that the output
was 1: We can check for the value 1 in each column at row 0 and
find column 2, but this involves searching through all columns.
We can avoid this overhead by creating the row-inverse of the
original Latin square (the inverse of each row), in the
well-known way we would create the inverse of any invertible
substitution. For example, in row 0 of the original square,
selection 0 is the value 2, so, in the row-inverse square,
selection 2 should be the value 0, and so on:
<PRE>
1 2 0 3
2 0 3 1
0 3 1 2
3 1 2 0
</PRE>
<P>Then, knowing we are in row 0, the value 1 is used to select
the second column, returning the unknown original value of 2.
<P>A practical Latin square combiner might combine two bytes,
and thus be a square of order 256, with 65,536 byte entries. In
such a square, each 256-element column and each 256-element row
would contain each of the values from 0 through 255 exactly
once.
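<P>The order-4 example above can be sketched directly (illustrative
code, not from the original text):

```python
SQUARE = [[2, 0, 1, 3],
          [1, 3, 0, 2],
          [0, 2, 3, 1],
          [3, 1, 2, 0]]

def combine(row, col):
    """One input selects the row, the other the column."""
    return SQUARE[row][col]

def row_inverse(square):
    """Invert each row, as with any invertible substitution table."""
    inv = [[0] * len(row) for row in square]
    for i, row in enumerate(square):
        for col, val in enumerate(row):
            inv[i][val] = col
    return inv

EXTRACT = row_inverse(SQUARE)

assert combine(0, 2) == 1         # combining 0 and 2 gives the value 1
assert EXTRACT[0][1] == 2         # row 0 and result 1 recover the 2
assert EXTRACT[0] == [1, 2, 0, 3] # first row of the row-inverse square
```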
<A NAME = "Layer"></A>
<P><DT><B>Layer</B>
<DD>In the context of
<A HREF = "#BlockCipher">block cipher</A> design, a layer is a
particular transformation or set of operations applied across the
<A HREF = "#Block">block</A>. In general, a layer is applied
once, and different layers have different transformations.
As opposed to
<A HREF = "#Round">rounds</A>, where a single transformation is
repeated in each round.
<P>Layers can be
<A HREF = "#Confusion">confusion</A> layers (which simply change
the block value),
<A HREF = "#Diffusion">diffusion</A> layers (which propagate
changes across the block in at least one direction) or both.
In some cases it is useful to do multiple operations as a
single layer to avoid the need for internal temporary storage
blocks.
<A NAME = "LFSR"></A>
<P><DT><B>LFSR</B>
<DD><A HREF = "#LinearFeedbackShiftRegister">Linear Feedback Shift
Register</A>.
<A NAME = "Linear"></A>
<P><DT><B>Linear</B>
<DD>Like a line; having an equation of the form
<B><TT>ax + b</TT></B>.
<P>There are various ways a relationship can be linear. One way is
to consider <I>a, x,</I> and <I>b</I> as
<A HREF = "#Integer">integers</A>. Another is for them to be
<A HREF = "#Polynomial">polynomial</A> elements of
<A HREF = "#GF2n">GF(2<SUP>n</SUP>)</A>. Yet another is to consider
<B>a</B> to be an <I>n</I> by <I>n</I> matrix, with <B>x</B> and
<B>b</B> as <I>n</I>-element vectors. There are probably various
other ways as well.
<P>Linearity also depends upon our point of view: For example,
integer addition <I>is</I> linear in the integers, but when
expressed as
<A HREF = "#Mod2">mod 2</A> operations, the exact same computation
producing the exact same results is <I>not</I> considered linear.
<P>In cryptography the issue may not be as much one of strict
mathematical linearity as it is the "distance" between a function
and some linear approximation (see
<A HREF = "#BooleanFunctionNonlinearity">Boolean function nonlinearity</A>).
True linear functions are used because they are easy and fast, but
they are also exceedingly weak. Of course
<A HREF = "#XOR">XOR</A> is linear and trivial, yet is used all the
time in arguably
<A HREF = "#Strength">strong</A> ciphers. But a design using linear
<A HREF = "#Component">components</A> must have other nonlinear
components to provide strength.
<A NAME = "LinearComplexity"></A>
<P><DT><B>Linear Complexity</B>
<DD>The length of the shortest
<A HREF = "#LinearFeedbackShiftRegister">Linear Feedback Shift
Register</A> which can produce a given sequence.
<P>Also see:
<A HREF = "http://www.io.com/~ritter/RES/LINCOMPL.HTM">Linear
Complexity: A Literature Survey</A>, in the
<A HREF = "http://www.io.com/~ritter/#LiteratureSurveys">Literature
Surveys and Reviews</A> section of the
<A HREF = "http://www.io.com/~ritter/">Ciphers By Ritter</A> page.
<A NAME = "LinearFeedbackShiftRegister"></A>
<P><DT><B>Linear Feedback Shift Register</B>
<DD>An efficient structure for producing sequences, often used in
<A HREF = "#RandomNumberGenerator">random number generator</A>
applications.
<P>In an <I>n</I>-element
<A HREF = "#ShiftRegister">shift register</A> (SR), if the last
element is connected to the first element, a set of <I>n</I> values
can circulate around the SR in <I>n</I> steps. But if the values in
two of the elements are combined by
<A HREF = "#ExclusiveOR">exclusive-OR</A> and that result connected
to the first element, it is possible to get an almost-perfect
<A HREF = "#MaximalLength">maximal length</A> sequence of
2<SUP>n</SUP>-1 steps. (The all-zeros state will produce another
all-zeros state, and so the system will "lock up" in a degenerate
cycle.) Because there are only 2<SUP>n</SUP> different states of
<I>n</I> binary values, every state value but one must occur exactly
once, which is a statistically-satisfying result. Moreover, the
values so produced are a perfect permutation of the "counting" numbers
(1..2<SUP>n</SUP>-1).
<PRE>
             A Linear Feedback Shift Register

      +----+    +----+    +----+    +----+    +----+    "a0"
  +-&lt;-| a5 |&lt;---| a4 |&lt;-*-| a3 |&lt;---| a2 |&lt;---| a1 |&lt;--+
  |   +----+    +----+  | +----+    +----+    +----+   |
  |                     v                              |
  +-------------------&gt;(+)----------------------------+

        1         0         1         0         0        1
</PRE>
<P>In the figure we have a LFSR of degree 5, consisting of 5 storage
elements a[5]..a[1] and the feedback computation a[0]=a[5]+a[3].
The stored values may be
<A HREF = "#Bit">bits</A> and the operation (+) addition
<A HREF = "#Mod2">mod 2</A>. A
<A HREF = "#Clock">clock</A> edge will simultaneously shift
all elements left, and load element a[1] with the feedback result as
it was before the clock changed the register. Each SR element is
just a time-delayed replica of the element before it, and here the
element subscript conveniently corresponds to the delay. We can
describe this logically:
<PRE>
a[1][t+1] = a[5][t] + a[3][t];
a[2][t+1] = a[1][t];
a[3][t+1] = a[2][t];
a[4][t+1] = a[3][t];
a[5][t+1] = a[4][t];
</PRE>
<P>Normally the time distinction is ignored, and we can write more
generally, for some feedback
<A HREF = "#Polynomial">polynomial</A> C and
<A HREF = "#State">state</A> polynomial A of degree <I>n</I>:
<PRE>
            n
    a[0] = SUM c[i]*a[i]
           i=1
</PRE>
<P>The feedback polynomial shown here is 101001, a degree-5 poly
running from c[5]..c[0] which is also
<A HREF = "#Irreducible">irreducible</A>. Since we have degree 5
which is a
<A HREF = "#MersennePrime">Mersenne prime</A>, C is also
<A HREF = "#PrimitivePolynomial">primitive</A>. So C produces a
<A HREF = "#MaximalLength">maximal length</A> sequence of exactly
31 steps, provided only that A is not initialized as zero. Whenever
C is irreducible, the reversed polynomial (here 100101) is also
irreducible, and will also produce a maximal length sequence.
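<P>A sketch of the degree-5 example (illustrative only; the stages
are held as bits of one integer, with a[i] as bit i-1):

```python
def lfsr_states(degree, taps, state):
    """Fibonacci LFSR as in the figure: the feedback is the mod-2 sum
    (XOR) of the tapped stages, shifted in as the new a[1]."""
    out = []
    for _ in range(2 ** degree - 1):
        out.append(state)
        fb = 0
        for t in taps:                      # e.g. a[0] = a[5] + a[3]
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & ((1 << degree) - 1)
    return out

# Feedback polynomial 101001 (taps at a5 and a3) is primitive, so all
# 31 nonzero states appear before the sequence repeats.
states = lfsr_states(5, [5, 3], 1)
assert sorted(states) == list(range(1, 32))
```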
<P>LFSR's are often used to generate the
<A HREF = "#ConfusionSequence">confusion sequence</A> for
<A HREF = "#StreamCipher">stream ciphers</A>, but this is very
dangerous: LFSR's are inherently
<A HREF = "#Linear">linear</A> and thus weak. Knowledge of the
feedback polynomial and only
<I>n</I> element values (from
<A HREF = "#KnownPlaintextAttack">known plaintext</A>) is sufficient
to run the sequence backward or forward. And knowledge of only
2<I>n</I> elements is sufficient to develop an unknown feedback
polynomial.
This means that LFSR's should not be used as stream ciphers without
in some way isolating the sequence from analysis. Also see
<A HREF = "#Jitterizer">jitterizer</A> and
<A HREF = "#AdditiveRNG">additive RNG</A>.
<A NAME = "LinearLogicFunction"></A>
<P><DT><B>Linear Logic Function</B>
<DD>A Boolean switching or
<A HREF = "#LogicFunction">logic function</A>
which can be realized using only
<A HREF = "#XOR">XOR</A>
and
<A HREF = "#AND">AND</A>
types of functions, which correspond to addition
<A HREF = "#Mod2">mod 2</A>
and multiplication mod 2, respectively.
<A NAME = "Logic"></A>
<P><DT><B>Logic</B>
<DD>A branch of philosophy related to distinguishing between correct
and incorrect reasoning. Even an invalid argument can sometimes
produce a correct conclusion. But a <I>valid</I> argument must
<I>always</I> produce a correct conclusion.
<P>Also devices which realize symbolic logic, such as
<A HREF = "#Boolean">Boolean</A> logic, a logic of TRUE or FALSE
values. Also see:
<A HREF = "#Subjective">subjective</A>,
<A HREF = "#Objective">objective</A>,
<A HREF = "#Contextual">contextual</A>,
<A HREF = "#Absolute">absolute</A>,
<A HREF = "#InductiveReasoning">inductive reasoning</A>,
<A HREF = "#DeductiveReasoning">deductive reasoning</A>, and
<A HREF = "#Fallacy">fallacy</A>.
<A NAME = "LogicFunction"></A>
<P><DT><B>Logic Function</B>
<DD>Fundamental
<A HREF = "#Digital">digital</A>
<A HREF = "#Logic">logic</A> operations. The fundamental
two-input (<I>dyadic</I>) one-output
<A HREF = "#Boolean">Boolean</A> functions are
<A HREF = "#AND">AND</A>
and
<A HREF = "#OR">OR</A>.
The fundamental one-input (<I>monadic</I>) one-output operation
is
<A HREF = "#NOT">NOT</A>.
These can be used in various ways to build
<A HREF = "#ExclusiveOR">exclusive-OR</A>
(<A HREF = "#XOR">XOR</A>),
which is also widely used as a fundamental function. Here we show
the
<A HREF = "#TruthTable">truth tables</A> for the fundamental
functions:
<PRE>
   INPUT    NOT
     0       1
     1       0

   INPUT    AND    OR    XOR
    0 0      0     0      0
    0 1      0     1      1
    1 0      0     1      1
    1 1      1     1      0
</PRE>
<P>These Boolean values can be stored as a
<A HREF = "#Bit">bit</A>,
and can be associated with 0 or 1, FALSE or TRUE, NO or YES,
etc.
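<P>The claim that XOR can be built from the fundamental functions is
easy to check (one common construction, shown as a sketch):

```python
def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b

# One common construction of XOR from AND, OR, and NOT.
def XOR(a, b):
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

# Verify against the XOR truth table for all four input pairs.
for a in (0, 1):
    for b in (0, 1):
        assert XOR(a, b) == (a ^ b)
```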
<A NAME = "LSB"></A>
<P><DT><B>LSB</B>
<DD>Least-Significant
<A HREF = "#Bit">Bit</A>. Typically the rightmost bit.
<A NAME = "MSequence"></A>
<P><DT><HR><P><B>M-Sequence</B>
<DD>A <A HREF = "#MaximalLength">maximal length</A>
<A HREF = "#ShiftRegister">shift register</A>
sequence.
<A NAME = "MachineLanguage"></A>
<P><DT><B>Machine Language</B>
<DD>Also "machine code." A
<A HREF = "#Computer">computer</A> program in the form of the
numeric values or "operation codes"
("<A HREF = "#Opcode">opcodes</A>") which the computer
can directly execute as instructions, commands, or "orders."
Thus, the very public
<A HREF = "#Code">code</A> associated with the instructions
available in a particular computer. Also the programming of a
computer at the bit or hexadecimal level, below even assembly
language. Also see
<A HREF = "#SourceCode">source code</A> and
<A HREF = "#ObjectCode">object code</A>.
<A NAME = "MagneticField"></A>
<P><DT><B>Magnetic Field</B>
<DD>The fundamental physical force resulting from moving charges.
Also see:
<A HREF = "#ElectromagneticField">electromagnetic field</A>.
<A NAME = "ManInTheMiddleAttack"></A>
<P><DT><B>Man-in-the-Middle Attack</B>
<DD>The original model used to analyze cryptosystems assumed that an
<A HREF = "#Opponent">Opponent</A> could <I>listen</I> to the
<A HREF = "#Ciphertext">ciphertext</A> traffic, and
perhaps even <I>interfere</I> with it, but not that messages
could be intercepted and completely hidden. Unfortunately, this
is in fact the situation in a store-and-forward
<A HREF = "#Computer">computer</A> network
like the Internet. Routing is not secure on the Internet, and
it is at least conceivable that messages between two people
could be routed through connections on the other side of the
world. This might be exploited to make such messages flow through
a particular computer for special processing.
<P>The Man-in-the-Middle (MITM) Attack is mainly applicable to
<A HREF = "#PublicKeyCipher">public key</A>
systems, and focuses on the idea that many people will
send their public
<A HREF = "#Key">keys</A> on the network. The bad part of this is
a lack of key
<A HREF = "#Authentication">authentication</A>, because the
Man-in-the-Middle
can send a key just as easily, and pretend to be the other end.
Then, if one <I>uses</I> that key, one has secure communication
with The Opponent, instead of the far end. The MITM can receive
a message, decipher it, read it, re-encipher it in the correct
public key, and send it along. In this way, neither end need
know anything is wrong, yet The Opponent is reading the mail.
<P>Perhaps the worst part of this is that a successful MITM attack
does not involve <I>any</I> attack on the actual ciphering. And
this means that all proofs or confidence in the security of
particular ciphering mechanisms are totally irrelevant to the
security of a system which supports MITM attacks.
<P>The way to avoid MITM attacks is to <I>certify</I> public
keys, but this is inconvenient and time-consuming. Unless the
cipher <I>requires</I> keys to be certified, this is rarely done.
The worst part of this is that a successful MITM attack consumes
few resources, need not "break" the cipher itself, and may
provide just the kind of white-collar desktop intelligence a
bureaucracy would love.
<P>It is interesting to note that, regardless of how inconvenient
it may be to share keys for a
<A HREF = "#SecretKeyCipher">secret-key cipher</A>, this is an
inherent authentication which prevents MITM attacks.
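<P>The relay pattern itself is simple enough to sketch. Below, a toy
XOR "cipher" with a repeating key stands in for real public-key
encryption (a deliberate simplification; no public-key mathematics is
modeled), but the key substitution and transparent re-enciphering are
exactly as described above:

```python
import itertools

def toy_encrypt(key: bytes, msg: bytes) -> bytes:
    # Toy stand-in for public-key encryption: XOR with a repeating key.
    # Only the key-substitution pattern of the attack is modeled here.
    return bytes(m ^ k for m, k in zip(msg, itertools.cycle(key)))

toy_decrypt = toy_encrypt  # XOR is its own inverse

bob_key = b"BOBKEY"
mitm_key = b"EVEKEY"

# Alice asks the network for Bob's key; the MITM substitutes its own.
key_alice_received = mitm_key

ciphertext = toy_encrypt(key_alice_received, b"attack at dawn")

# The MITM deciphers, reads, and re-enciphers under Bob's real key.
plaintext_read_by_mitm = toy_decrypt(mitm_key, ciphertext)
forwarded = toy_encrypt(bob_key, plaintext_read_by_mitm)

# Bob deciphers normally and suspects nothing.
assert toy_decrypt(bob_key, forwarded) == b"attack at dawn"
assert plaintext_read_by_mitm == b"attack at dawn"
```

Note that no ciphering operation was "broken" at any point; only the
unauthenticated key exchange was exploited.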
<A NAME = "Mapping"></A>
<P><DT><B>Mapping</B>
<DD>Given sets <I>X</I> and <I>Y,</I> and an operation <I>f</I>
<PRE>
<I>f</I>: X -> Y ,
</PRE>
the <I>mapping</I> or
<A HREF = "#Function"><I>function</I></A> or <I>transformation</I>
<I>f</I> takes any value in the
<A HREF = "#Domain">domain</A> <I>X</I> into some value in the
<A HREF = "#Range">range</A>, which is contained in <I>Y.</I>
For each element <I>x</I> in <I>X,</I> a mapping associates a single
element <I>y</I> in <I>Y.</I>
Element <I>f(x)</I> in <I>Y</I> is the <I>image</I> of element
<I>x</I> in <I>X.</I>
<UL>
<P><LI>If <I>f(X)</I> covers all elements in <I>Y,</I> <I>f</I>
is a mapping of <I>X</I>
<A HREF = "#Onto"><B>onto</B></A> <I>Y,</I> and is
<A HREF = "#Surjective"><B>surjective</B></A>.
<P><LI>If <I>f(X)</I> only partially covers <I>Y,</I> <I>f</I>
is a mapping of <I>X</I>
<A HREF = "#Into"><B>into</B></A> <I>Y</I>.
</UL>
<P>If no two values of <I>x</I> in <I>X</I> produce the same
result <I>f(x),</I> <I>f</I> is
<A HREF = "#OneToOne"><B>one-to-one</B></A> or
<A HREF = "#Injective"><B>injective</B></A>.
<UL>
<P><LI>If <I>f</I> is both injective and surjective, it is
<B><A HREF = "#OneToOne">one-to-one</A> and
<A HREF = "#Onto">onto</A></B> or
<A HREF = "#Bijective"><B>bijective</B></A>.
<P><LI>If <I>f</I> is bijective, there exists an
<A HREF = "#Inverse"><B>inverse</B></A>
<I>f<SUP> -1</SUP></I> such that:
<PRE>
f<SUP>-1</SUP>(f(x)) = x.
</PRE>
<P><LI>If <I>f</I> is identical with <I>f<SUP> -1</SUP>,</I>
<I>f</I> is an <A HREF = "#Involution"><B>involution</B></A>.
<P><LI>A
<A HREF = "#Permutation">permutation</A> of <I>X</I> is a
bijection from <I>X</I> to <I>X.</I>
</UL>
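<P>For small finite sets these properties can be checked exhaustively.
A sketch in Python (the example map is an assumption chosen for
illustration; it happens to be an involution as well):

```python
def is_injective(f, X):
    images = [f(x) for x in X]
    return len(set(images)) == len(images)   # no two x share an image

def is_surjective(f, X, Y):
    return {f(x) for x in X} == set(Y)       # f(X) covers all of Y

def is_bijective(f, X, Y):
    return is_injective(f, X) and is_surjective(f, X, Y)

X = [0, 1, 2, 3]
f = lambda x: x ^ 1        # swaps 0<->1 and 2<->3: a permutation of X

assert is_bijective(f, X, X)
assert all(f(f(x)) == x for x in X)          # f is its own inverse
```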
<A NAME = "MarkovProcess"></A>
<P><DT><B>Markov Process</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, a
<A HREF = "#Stochastic">stochastic</A> (random)
<A HREF = "#Process">process</A> (function) in which all possible
outcomes are defined by the current
<A HREF = "#State">state</A>, independent of all previous states.
Also see:
<A HREF = "#StationaryProcess">stationary process</A>.
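<P>A minimal sketch: the transition table below is hypothetical, but it
shows the defining property, that the next state is drawn using the
current state alone, with no memory of earlier states:

```python
import random

random.seed(1)

# A two-state Markov process: the distribution of the next state is
# fixed entirely by the current state (transition table is illustrative).
transitions = {
    "A": [("A", 0.9), ("B", 0.1)],
    "B": [("A", 0.5), ("B", 0.5)],
}

def step(state):
    r = random.random()
    acc = 0.0
    for nxt, p in transitions[state]:
        acc += p
        if r < acc:
            return nxt
    return transitions[state][-1][0]   # guard against float rounding

state = "A"
path = [state]
for _ in range(20):
    state = step(state)
    path.append(state)

assert all(s in transitions for s in path)
```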
<A NAME = "MathematicalCryptography"></A>
<P><DT><B>Mathematical Cryptography</B>
<DD>
<A HREF = "#Cryptography">Cryptography</A> based on mathematical
operations, such as taking extremely large values to extremely
large powers,
<A HREF = "#Modulo">modulo</A> the product of two
<A HREF = "#Prime">primes</A>. Normally heavily
involved with number theory. As opposed to
<A HREF = "#MechanisticCryptography">mechanistic cryptography</A>.
<P>There are some problems with a strictly mathematical approach
to cryptography:
<OL>
<LI>Mathematical symbology has evolved for concise expression.
It is thus not "isomorphic" to the complexity of the implementation,
and so is not a good vehicle for the design-time trade-off of
computation versus
<A HREF = "#Strength">strength</A>.
<LI>Most mathematical operations are useful or "beautiful"
relationships specifically intended to support understanding in
either direction, as opposed to relationships which might be
particularly difficult to reverse or infer.
So when using the traditional operations for cryptography, we
must first defeat the very properties which made these operations
so valuable in their normal use.
<LI>Mathematics has evolved to produce, describe and expose
<I>structure</I>, as in useful or "beautiful" large-scale
relationships and groupings. But, in a sense, relationships and
groupings are the exact opposite of the fine-grained completely
<A HREF = "#Random">random</A> mappings that cryptography would
like to see. Such mappings are awkward to express mathematically,
and contain little of the structure which mathematics is intended
to describe.
<LI>There may be an ingrained tendency in math practitioners, based
on long practice, to <I>construct</I> math-like relationships, and
such relationships are not desirable in this application. So when
using math to construct cryptography, we may first have to defeat
our own training and tendencies to group, understand and simplify.
</OL>
<P>On the other hand, mathematics is <I>irreplaceable</I> in
providing the tools to pick out and describe <I>structure</I> in
apparently strong cipher designs. Mathematics can identify
specific strength problems, and evaluate potential fixes. But
there appears to be no real hope of evaluating strength with
respect to <I>every possible</I> attack, even using mathematics.
<P>Although mathematical cryptography has held out the promise
of providing <I>provable</I>
<A HREF = "#Security">security</A>, in over 50 years of work,
<B>no</B> practical cipher has been generally accepted as having
<I>proven</I>
<A HREF = "#Strength">strength</A>. See, for example:
<A HREF = "#OneTimePad">one time pad</A>.
<A NAME = "MB"></A>
<P><DT><B>MB</B>
<DD>Megabyte. 2<SUP>20</SUP> or 1,048,576
<A HREF = "#Byte">bytes</A>.
<A NAME = "Mb"></A>
<P><DT><B>Mb</B>
<DD>Megabit. 2<SUP>20</SUP> or 1,048,576
<A HREF = "#Bit">bits</A>.
<A NAME = "MaximalLength"></A>
<P><DT><B>Maximal Length</B>
<DD>A <A HREF = "#LinearFeedbackShiftRegister">linear feedback shift
register</A> (LFSR) sequence of 2<SUP>n</SUP>-1 steps (assuming a
bit-wide
<A HREF = "#ShiftRegister">shift register</A> of <I>n</I> bits). This
means that every
<A HREF = "#Binary">binary</A> value the register can hold, except
zero, will occur on some step, and then not occur again until all
other values have been produced. A maximal-length LFSR can be
considered a binary counter in which the count values have been
<A HREF = "#Shuffle">shuffled</A> or
<A HREF = "#Encipher">enciphered</A>. And while the sequence from
a normal binary counter is perfectly
<A HREF = "#Balance">balanced</A>,
the sequence from a maximal-length LFSR is <I>almost</I> perfectly
balanced. Also see
<A HREF = "#MSequence">M-sequence</A>.
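<P>A sketch in Python, using the primitive polynomial
x<SUP>4</SUP> + x + 1 (a standard small example, an assumption not
taken from the text above): the register steps through all 15 nonzero
values, and the output bit sequence shows the "almost perfect" balance
mentioned above, eight 1s against seven 0s.

```python
def lfsr_states(seed=0b0001):
    """4-bit Fibonacci LFSR for the primitive polynomial x^4 + x + 1."""
    s = seed
    states = []
    while True:
        states.append(s)
        bit = (s ^ (s >> 3)) & 1        # feedback taps at degrees 4 and 1
        s = (s >> 1) | (bit << 3)
        if s == seed:
            return states

states = lfsr_states()

# Maximal length: every nonzero 4-bit value occurs exactly once per period.
assert len(states) == 2**4 - 1
assert sorted(states) == list(range(1, 16))

# Output bits (the lsb) are almost perfectly balanced: eight 1s, seven 0s.
assert sum(s & 1 for s in states) == 8
```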
<A NAME = "Mechanism"></A>
<P><DT><B>Mechanism</B>
<DD>The logical concept of a machine, which may be realized either
as a physical machine, or as a sequence of logical commands
executed by a physical machine.
<P>A mechanism can be seen as a process or an implementation for
performing that process (such as
<A HREF = "#Electronic">electronic</A>
<A HREF = "#Hardware">hardware</A>,
<A HREF = "#Computer">computer</A>
<A HREF = "#Software">software</A>, hybrids, or the like).
<A NAME = "MechanisticCryptography"></A>
<P><DT><B>Mechanistic Cryptography</B>
<DD>
<A HREF = "#Cryptography">Cryptography</A> based on
<A HREF = "#Mechanism">mechanisms</A>, or machines. As opposed to
<A HREF = "#MathematicalCryptography">mathematical cryptography</A>.
<P>Although perhaps looked down upon by those of the mathematical
cryptography persuasion, mechanistic cryptography certainly does
use mathematics to design and predict performance. But rather
than being restricted to arithmetic operations, mechanistic
cryptography tends to use a wide variety of mechanically-simple
<A HREF = "#Component">components</A> which may not have concise
mathematical descriptions. Rather than simply implementing a
<A HREF = "#System">system</A> of math expressions, complexity is
constructed from the various efficient components available to
digital computation.
<A NAME = "MersennePrime"></A>
<P><DT><B>Mersenne Prime</B>
<DD>A
<A HREF = "#Prime">prime</A> p for which
<NOBR>2<SUP>p</SUP> - 1</NOBR> is also prime. For example, 5 is a
Mersenne prime because
<NOBR>2<SUP>5</SUP> - 1 = 31,</NOBR> and 31 is prime. For
<A HREF = "#Mod2Polynomial">mod 2 polynomials</A> of
Mersenne prime degree, <I>every</I>
<A HREF = "#Irreducible">irreducible</A> is also
<A HREF = "#Primitive">primitive</A>.
<PRE>
 Mersenne Primes:

      2     107     9689    216091
      3     127     9941    756839
      5     521    11213    859433
      7     607    19937   1257787
     13    1279    21701   1398269
     17    2203    23209
     19    2281    44497
     31    3217    86243
     61    4253   110503
     89    4423   132049
</PRE>
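<P>Both conditions are cheap to verify for the small entries in the
table; a sketch using trial division (adequate at this scale, though
serious searches use the Lucas-Lehmer test):

```python
def is_prime(n: int) -> bool:
    """Trial division; adequate for the small values checked here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# The first few exponents from the table above.
for p in [2, 3, 5, 7, 13, 17, 19, 31]:
    assert is_prime(p) and is_prime(2**p - 1)

# A prime exponent alone is not enough: 2^11 - 1 = 2047 = 23 * 89.
assert is_prime(11) and not is_prime(2**11 - 1)
```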
<A NAME = "MessageDigest"></A>
<P><DT><B>Message Digest</B>
<DD>A small value which represents an entire message for
purposes of authentication; a
<A HREF = "#Hash">hash</A>.
<A NAME = "MessageKey"></A>
<P><DT><B>Message Key</B>
<DD>A
<A HREF = "#Key">key</A> transported with the message and used for
deciphering the message. (The idea of a
"<A HREF = "#SessionKey">session key</A>" is very
similar, but lasts across multiple messages.)
<P>Normally, the message key is a large
<A HREF = "#Random">random</A> value which becomes the key for
ciphering the data in a single message. Normally, the message key
itself is enciphered under the User Key or other key for that link.
The receiving end first deciphers the message key, then uses that
value as the key for deciphering the message data. Alternately, the
random value itself may be sent unenciphered, but is then enciphered
or hashed (under a keyed cryptographic hash) to produce a value used
as the data ciphering key.
<P>The message key assures that the actual data is ciphered under a
key which is an arbitrary selection from a huge number of possible
keys; it therefore prevents weakness due to user key selection.
A message key is used exactly once, no matter how many times the
same message is enciphered, so at most, a successful attack on a
message key exposes just one message. The internal construction of
a random message key cannot be controlled by a user, and thus
prevents all
<A HREF = "#Attack">attacks</A> based
on repeated ciphering under a single key. To the extent that the
message key value really is random and is never exposed on either
end, the message key is much more easily protected than ordinary
text (see
<A HREF = "#IdealSecrecy">ideal secrecy</A>). In a sense, a
message key is the higher-level concept of an
<A HREF = "#IV">IV</A>, which is necessarily distinct for each
particular design.
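<P>The scheme can be sketched as follows. The keyed stream function
here is a stand-in (a hash in counter mode, chosen purely for
illustration), not part of any particular design; the point is the
two-level pattern of wrapping a random message key under the user key:

```python
import hashlib, secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Illustrative keyed stream from SHA-256 in counter mode; a stand-in
    # for whatever real cipher a system would use, NOT a vetted design.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ k for d, k in zip(data, out))

user_key = b"long-term user key"
message = b"meet at the usual place"

# Sender: pick a fresh random message key, cipher the data under it,
# and cipher the message key itself under the user key.
message_key = secrets.token_bytes(32)
wrapped_key = keystream_xor(user_key, message_key)
ciphertext = keystream_xor(message_key, message)

# Receiver: first recover the message key, then decipher the data.
recovered_key = keystream_xor(user_key, wrapped_key)
assert recovered_key == message_key
assert keystream_xor(recovered_key, ciphertext) == message
```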
<A NAME = "MITM"></A>
<P><DT><B>MITM</B>
<DD>
<A HREF = "#ManInTheMiddleAttack">Man In The Middle</A>.
<A NAME = "Mixing"></A>
<P><DT><B>Mixing</B>
<DD>The act of transforming multiple input values into one or
more output values, such that changing any input value will
change the output value. There is no implication that the
result must be
<A HREF = "#Balance">balanced</A>, but effective mixing may need
to be, in some sense,
<A HREF = "#Complete">complete</A>. Also see
<A HREF = "#MixingCipher">Mixing Cipher</A>,
<A HREF = "#Combiner">combiner</A>,
<A HREF = "#LatinSquareCombiner">Latin square combiner</A>, and
<A HREF = "#BalancedBlockMixing">Balanced Block Mixing</A>.
<A NAME = "MixingCipher"></A>
<P><DT><B>Mixing Cipher</B>
<DD>A
<A HREF = "#BlockCipher">block cipher</A> based on
<A HREF = "#BalancedBlockMixing">Balanced Block Mixing</A>
of small elements in
<A HREF = "#FFT">FFT</A>-like or
<A HREF = "#FWT">FWT</A>-like
<A HREF = "#Mixing">mixing</A> patterns.
<P>Below, we have a toy 32-bit-block Mixing Cipher.
<A HREF = "#Plaintext">Plaintext</A> at the top is transformed into
<A HREF = "#Ciphertext">ciphertext</A> at the bottom.
Each "S" is an 8-bit
<A HREF = "#SubstitutionTable">substitution table</A>, and
each table (and now each mixing operation also) is individually
<A HREF = "#Key">keyed</A>.
<P>Horizontal lines connect elements which are to be mixed
together: Each *---* represents a single
<A HREF = "#BalancedBlockMixing">Balanced Block Mixing</A> or BBM.
Each BBM takes two elements, mixes them, and returns two mixed
values. The mixed results then replace the original values in the
selected positions just like the "butterfly" operations used in some
<A HREF = "#FFT">FFT</A>s.
<PRE>
         A 32-Bit Mixing Cipher

   |   |   |   |      <- Input Block (Plaintext)
   S   S   S   S      <- <A HREF = "#Fencing">Fencing</A>
   |   |   |   |
   *---*   *---*      <- 2 BBM Mixings
   |   |   |   |
   *-------*   |      <- 1 BBM Mixing
   |   *-------*      <- 1 BBM Mixing
   |   |   |   |
   S   S   S   S      <- Fencing
   |   |   |   |
   *-------*   |
   |   *-------*
   |   |   |   |
   *---*   *---*
   |   |   |   |
   S   S   S   S      <- Fencing
   |   |   |   |      <- Output Block (Ciphertext)
</PRE>
<P>By mixing each element with another, and then each pair with
another pair and so on, every element is eventually mixed with every
other element. Each BBM mixing is
<A HREF = "#Dyadic">dyadic</A>, so each sublevel mixes
twice as many elements as the sublevel before it. A block of <I>n</I>
elements is thus fully mixed in <NOBR>log<SUB>2</SUB> <I>n</I></NOBR>
sublevels, and each result element is influenced equally by
each and every input element.
<P>The pattern of these mixings is exactly like some implementations
of the FFT, and thus the term "FFT-style." Also see the articles in the
<A HREF = "http://www.io.com/~ritter/CRYPHTML.HTM#MixTech">Mixing
Ciphers</A> section on the
<A HREF = "http://www.io.com/~ritter/">Ciphers By Ritter</A> pages.
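<P>The claim that <NOBR>log<SUB>2</SUB> <I>n</I></NOBR> sublevels
suffice can be checked by tracking which input positions can influence
each element position. In this sketch the pairwise mixing is abstracted
to a set union (standing in for a real Balanced Block Mixing), and we
use eight elements rather than the four in the diagram above:

```python
# Track which input positions can influence each element position as the
# FFT-style pairing proceeds; the pairwise "mix" is just a set union,
# standing in for a real BBM.
n = 8                                   # number of elements (a power of 2)
influence = [{i} for i in range(n)]

span = 1
levels = 0
while span < n:
    for base in range(0, n, 2 * span):
        for i in range(base, base + span):
            j = i + span                # the FFT "butterfly" partner
            merged = influence[i] | influence[j]
            influence[i] = influence[j] = merged
    span *= 2
    levels += 1

# After log2(n) sublevels, every output depends on every input.
assert levels == 3
assert all(s == set(range(n)) for s in influence)
```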
<A NAME = "Mod2"></A>
<P><DT><B>Mod 2</B>
<DD>The
<A HREF = "#Field">field</A> formed from the set of integers {0,1}
with operations + and * producing the remainder after dividing by
<A HREF = "#Congruence">modulus</A> 2. Thus:
<PRE>
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 0
1 + 1 + 1 = 1
0 * 0 = 0
0 * 1 = 0
1 * 0 = 0
1 * 1 = 1
</PRE>
Subtraction mod 2 is the same as addition mod 2.
The operations + and * can also be considered the
<A HREF = "#LogicFunction">logic functions</A>
<A HREF = "#XOR">XOR</A> and
<A HREF = "#AND">AND</A> respectively.
<A NAME = "Mod2Polynomial"></A>
<P><DT><B>Mod 2 Polynomial</B>
<DD>A <A HREF = "#Polynomial">polynomial</A> in which the
coefficients are taken
<A HREF = "#Mod2">mod 2</A>. The four arithmetic operations
addition, subtraction, multiplication and division are supported.
As usual, mod 2 subtraction is the same as mod 2 addition. Each
column of coefficients is added separately, without "carrys"
to an adjacent column:
<PRE>
 Addition and Subtraction:

     1 0 1 1
   + 0 1 0 1
   + 1 1 0 0
   ---------
     0 0 1 0

 Multiplication:

         1 0 1 1
       * 1 1 0 0
       ---------
               0
             0
     1 0 1 1
   1 0 1 1
   -------------
   1 1 1 0 1 0 0
</PRE>
Polynomial multiplication is <I>not</I> the same as repeated
polynomial addition. But there is a fast approach to squaring
mod 2 polynomials:
<PRE>
            a b c d
            a b c d
        ------------
         ad bd cd dd
      ac bc cc dc
   ab bb cb db
aa ba ca da
----------------------
a  0  b  0  c  0  d
</PRE>
To square a mod 2 polynomial, all we have to do is "insert" a zero
between every column: the diagonal terms satisfy aa = a for a = 0 or
a = 1, and since ab = ba, each pair of cross terms sums to either
0 + 0 = 0 or 1 + 1 = 0 and so cancels.
<PRE>
 Division:

                1 0 1 1
          -------------
1 1 0 0 ) 1 1 1 0 1 0 0
          1 1 0 0
          ---------
              1 0 1 0
              1 1 0 0
              -------
                1 1 0 0
                1 1 0 0
                -------
                      0
</PRE>
<P>The decision about whether the divisor "goes into" the dividend
is based exclusively on the most-significant (leftmost) digit.
This makes polynomial division far easier than integer division.
<P>Mod 2 polynomials behave much like
<A HREF = "#Integer">integers</A> in that one
polynomial may or may not divide another without remainder. This
means that we can expect to find analogies to integer
"<A HREF = "#Prime">primes</A>,"
which we call
<A HREF = "#Irreducible"><I>irreducible</I></A> polynomials.
<P>Mod 2 polynomials do not constitute a
<A HREF = "#Field">field</A>: the degree of a product grows
without bound, and most polynomials have no multiplicative
inverse. However, a
<A HREF = "#FiniteField">finite field</A> of polynomials can be
created by choosing an irreducible modulus polynomial, thus
producing a Galois field
<A HREF = "#GF2n">GF 2<SUP>n</SUP></A>.
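<P>The worked examples above can be reproduced by holding each mod 2
polynomial as an integer bit pattern, with XOR as the carry-free
column addition:

```python
def polymul(a: int, b: int) -> int:
    """Multiply two mod 2 polynomials held as bit patterns."""
    result = 0
    while b:
        if b & 1:
            result ^= a        # mod 2 addition of columns is XOR
        a <<= 1
        b >>= 1
    return result

def polydivmod(dividend: int, divisor: int):
    """Divide mod 2 polynomials; return (quotient, remainder)."""
    quotient = 0
    shift = dividend.bit_length() - divisor.bit_length()
    while shift >= 0:
        # "Goes in" is decided by the leftmost digit alone.
        if dividend >> (shift + divisor.bit_length() - 1) & 1:
            dividend ^= divisor << shift
            quotient |= 1 << shift
        shift -= 1
    return quotient, dividend

# The worked examples above: 1011 * 1100 = 1110100, and back again.
assert polymul(0b1011, 0b1100) == 0b1110100
assert polydivmod(0b1110100, 0b1100) == (0b1011, 0)

# Squaring just spreads the bits out, with a zero between every column.
assert polymul(0b1011, 0b1011) == 0b1000101
```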
<A NAME = "Mode"></A>
<P><DT><B>Mode</B>
<DD>One possibility is:
<A HREF = "#BlockCipher">block cipher</A>
<A HREF = "#OperatingMode">operating mode</A>.
<A NAME = "Modulo"></A>
<P><DT><B>Modulo</B>
<DD>Casually, the remainder after an integer division by a
modulus; see
<A HREF = "#Congruence">congruence</A>.
When the modulus is
<A HREF = "#Prime">prime</A>, this may generate a useful
<A HREF = "#Field">field</A>.
<A NAME = "Monadic"></A>
<P><DT><B>Monadic</B>
<DD>Relating to <I>monad</I>, which is Greek for single or one.
In particular, a function with a single input or argument, also
called
<A HREF = "#Unary">unary</A>.
Also see:
<A HREF = "#Dyadic">dyadic</A>.
<A NAME = "MonoalphabeticSubstitution"></A>
<P><DT><B>Monoalphabetic Substitution</B>
<DD>Substitution using a single
<A HREF = "#Alphabet">alphabet</A>. Also called
<A HREF = "#SimpleSubstitution">simple substitution</A>.
As opposed to
<A HREF = "#PolyalphabeticSubstitution">Polyalphabetic
Substitution</A>.
<A NAME = "Monographic"></A>
<P><DT><B>Monographic</B>
<DD>Greek for "single letter." A
<A HREF = "#Cipher">cipher</A> which translates one
<A HREF = "#Plaintext">plaintext</A> symbol at a time into
<A HREF = "#Ciphertext">ciphertext</A>.
As opposed to
<A HREF = "#Polygraphic">polygraphic</A>; also see
<A HREF = "#Homophonic">homophonic</A> and
<A HREF = "#Polyphonic">polyphonic</A>.
<A NAME = "MultipleEncryption"></A>
<P><DT><B>Multiple Encryption</B>
<DD>
<A HREF = "#Encipher">Enciphering</A> or
<A HREF = "#Encryption">encrypting</A> a message more than once.
This usually has the
<A HREF = "#Strength">strength</A>
advantage of producing a very random-like
<A HREF = "#Ciphertext">ciphertext</A>
from the first pass, which is of course the
"<A HREF = "#Plaintext">plaintext</A>" for the next pass.
<P>Multiple encryption using different
<A HREF = "#Key">keys</A> can be a way to increase strength.
And multiple encryption using different
<A HREF = "#Cipher">ciphers</A> can reduce the probability of
using a single cipher which has been
<A HREF = "#Break">broken</A> in secret. In both cases, the cost
is additional ciphering operations.
<P>Unfortunately, multiple encryption using just <I>two</I> (2)
ciphers may not be much of an advantage: if we assume The Opponents know
which ciphers are used, they can manipulate <I>both</I> the plaintext
<I>and</I> the ciphertext to search for a match
(a "meet-in-the-middle"
<A HREF = "#Attack">attack</A> strategy). One way to avoid this is
to use <I>three</I> (3) cipherings, as in Triple DES.
<P>Multiple encryption also can be <I>dangerous</I>, if a single
cipher is used with the same key each time. Some ciphers are
<A HREF = "#Involution">involutions</A> which both encipher and
decipher with the same process; these ciphers will
<B>de</B>cipher a message if it is
<B>en</B>ciphered a second time under the same key. This
is typical of classic additive synchronous stream ciphers, as it
avoids the need to have separate encipher and decipher operations.
But it also can occur with block ciphers operated in
stream-cipher-like modes such as
<A HREF = "#OFB">OFB</A>, for exactly the same reason.
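<P>The hazard is easy to demonstrate with a classic additive stream
combiner, which is an involution:

```python
import itertools

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Additive stream combining: encipher and decipher are the same
    # operation, so this cipher is an involution.
    return bytes(d ^ k for d, k in zip(data, itertools.cycle(key)))

key = b"keystream bytes"
plaintext = b"the same key twice"

once = xor_stream(key, plaintext)
twice = xor_stream(key, once)

# "Multiple encryption" under the same key has deciphered the message.
assert once != plaintext
assert twice == plaintext
```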
<A NAME = "Nomenclator"></A>
<P><DT><HR><P><B>Nomenclator</B>
<DD>Originally, a list of transformations from <I>names</I> to
symbols or numbers for diplomatic communications. Later, typically
a list of transformations from names,
<A HREF = "#Polygraphic">polygraphic</A> syllables, and
<A HREF = "#Monographic">monographic</A> letters, to numbers.
Usually the monographic transformations had multiple or
<A HREF = "#Homophonic">homophonic</A> alternatives for
frequently-used letters. Generally smaller than a
<A HREF = "#Codebook">codebook</A>, due to the use of the
syllables instead of a comprehensive list of phrases.
A sort of early manual
<A HREF = "#Cipher">cipher</A> with some characteristics of a
<A HREF = "#Code">code</A>, that operated like a
<A HREF = "#Codebook">codebook</A>.
<A NAME = "Nominal"></A>
<P><DT><B>Nominal</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, measurements which are in
categories or "bins." Also see:
<A HREF = "#Ordinal">ordinal</A>, and
<A HREF = "#Interval">interval</A>.
<A NAME = "Nonlinearity"></A>
<P><DT><B>Nonlinearity</B>
<DD>The extent to which a function is not
<A HREF = "#Linear">linear</A>. See
<A HREF = "#BooleanFunctionNonlinearity">Boolean function nonlinearity</A>.
<A NAME = "NOT"></A>
<P><DT><B>NOT</B>
<DD>A Boolean
<A HREF = "#LogicFunction">logic function</A> which is the
"complement" or the
<A HREF = "#Mod2">mod 2</A> addition of 1.
<A NAME = "NullHypothesis"></A>
<P><DT><B>Null Hypothesis</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, the particular statement or
hypothesis <I>H</I><SUB>0</SUB> which is accepted unless a
<A HREF = "#Statistic">statistic</A> testing that hypothesis
produces evidence to the contrary. Normally, the null hypothesis
is accepted when the associated statistical test indicates
"nothing unusual found."
<P>The logically contrary
<A HREF = "#AlternativeHypothesis">alternative hypothesis</A>
<I>H</I><SUB>1</SUB> is sometimes formulated with the specific
<I>hope</I> that something unusual <I>will</I> be found, but this
can be very tricky to get right. Many statistical tests (such as
<A HREF = "#GoodnessOfFit">goodness-of-fit</A> tests) can only
indicate whether something matches what we expect, or does not.
But any number of things can cause a mismatch, including a
fundamentally flawed experiment. A simple mismatch does not
normally imply the presence of a particular quality.
<P>Even in the best possible situation,
<A HREF = "#Random">random</A>
<A HREF = "#Sample">sampling</A> will produce a range or
<A HREF = "#Distribution">distribution</A> of test statistic values.
Often, even the worst possible statistic value can be produced by
an unlucky sampling of the best possible data. It is thus important
to know what distribution to expect because of the sampling alone,
so if we find a <I>different</I> distribution, that will be evidence
supporting the alternative hypothesis <I>H</I><SUB>1</SUB>.
<P>If we collect enough statistic values, we should see them occur
in the ideal distribution for that particular statistic. So if we
call the upper 5 percent of the distribution "failure" (this is the
<A HREF = "#Significance">significance</A> level) we not only
<I>expect</I> but in fact <I>require</I> such "failure" to occur
about 1 time in 20. If it does not, we will in fact have detected
something unusual, something which might even indicate problems in
the experimental design.
<P>If we have only a small number of samples, and do not run repeated
trials, a relatively few chance events can produce an improbable
statistic value, which might cause us to reject a valid null
hypothesis, and so commit a
<A HREF = "#TypeIError">type I error</A>.
<P>On the other hand, if there <I>is</I> a systematic deviation in
the underlying distribution, only a very specific type of random
sampling could mask that problem. With few samples and trials,
though, the chance random masking of a systematic problem is still
<I>possible,</I> and could lead to a
<A HREF = "#TypeIIError">type II error</A>.
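<P>The expectation of "failure" about 1 time in 20 can be checked by
simulation. Here the coin really is fair (the null hypothesis is true),
yet a textbook two-sided z-test at the 5 percent significance level
rejects in roughly 1 trial in 20; each such rejection is a type I error:

```python
import random
from statistics import NormalDist
from math import sqrt

random.seed(2)
norm = NormalDist()

def coin_trial(n=1000):
    """Two-sided z-test of 'the coin is fair' on n simulated fair flips."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    z = (heads - n / 2) / sqrt(n / 4)
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return p_value < 0.05          # True = reject H0 (a type I error here)

trials = 2000
rejections = sum(coin_trial() for _ in range(trials))

# H0 is true, yet about 1 trial in 20 "fails" at the 5 percent level.
rate = rejections / trials
assert 0.025 < rate < 0.075
```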
<A NAME = "ObjectCode"></A>
<P><DT><HR><P><B>Object Code</B>
<DD>Typically,
<A HREF = "#MachineLanguage">machine language</A> instructions
represented in a form which can be "linked" with other
routines. Also see
<A HREF = "#SourceCode">source code</A>.
<A NAME = "Objective"></A>
<P><DT><B>Objective</B>
<DD>In the study of
<A HREF = "#Logic">logic</A>, reality observed without interpretation.
As opposed to
<A HREF = "#Subjective">subjective</A> or interpreted reality.
Alternately, a goal.
<A NAME = "Octal"></A>
<P><DT><B>Octal</B>
<DD>Base 8: The numerical representation in which each digit has an
<A HREF = "#Alphabet">alphabet</A> of eight symbols, generally
0 through 7.
<P>Somewhat easier to learn than
<A HREF = "#Hexadecimal">hexadecimal</A>, since no new numeric
symbols are needed, but octal can only represent three
<A HREF = "#Bit">bits</A> at a
time. This generally means that the leading digit will not take all
values, and that means that the representation of the top part of
two concatenated values will differ from its representation alone,
which can be confusing. Also see:
<A HREF = "#Binary">binary</A> and
<A HREF = "#Decimal">decimal</A>.
<A NAME = "Octave"></A>
<P><DT><B>Octave</B>
<DD>A
<A HREF = "#Frequency">frequency</A>
ratio of 2:1. From an 8-step musical scale.
<A NAME = "OFB"></A>
<P><DT><B>OFB</B>
<DD>OFB or Output FeedBack is an
<A HREF = "#OperatingMode">operating mode</A> for a
<A HREF = "#BlockCipher">block cipher</A>.
<P>OFB is closely related to
<A HREF = "#CFB">CFB</A>, and is intended to provide some of
the characteristics of a
<A HREF = "#StreamCipher">stream cipher</A> from a block cipher.
OFB is a way of using a block cipher to form a
<A HREF = "#RandomNumberGenerator">random number generator</A>.
The resulting
<A HREF = "#PseudoRandom">pseudorandom</A>
<A HREF = "#ConfusionSequence">confusion sequence</A> can be
<A HREF = "#Combiner">combined</A> with data as in the usual
stream cipher.
<P>OFB assumes a
<A HREF = "#ShiftRegister">shift register</A> of the block cipher
block size. An
<A HREF = "#IV">IV</A> or initial value first fills the register,
and then is ciphered. Part of the result, often just a single
<A HREF = "#Byte">byte</A>, is used to cipher data, and <I>also</I> is
shifted into the register. The resulting new register value is
ciphered, producing another confusion value for use in stream
ciphering.
<P>One disadvantage of this, of course, is the need for a full
block-wide ciphering operation, typically for each data byte
ciphered. The advantage is the ability to cipher individual
characters, instead of requiring accumulation into a block
before processing.
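<P>A sketch of the full-block-feedback form of OFB (the text above
describes the variant which feeds back only part of each result). The
"block cipher" here is a keyed hash, an assumption chosen purely for
illustration; any real block cipher would serve:

```python
import hashlib

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher: a keyed pseudorandom function
    # on 16-byte blocks, used only to illustrate the OFB pattern.
    return hashlib.sha256(key + block).digest()[:16]

def ofb_keystream(key: bytes, iv: bytes, nbytes: int) -> bytes:
    register = iv
    out = bytearray()
    while len(out) < nbytes:
        register = toy_block_cipher(key, register)  # result feeds back
        out += register
    return bytes(out[:nbytes])

def ofb_xor(key: bytes, iv: bytes, data: bytes) -> bytes:
    # Encipher and decipher are the same XOR-with-keystream step.
    ks = ofb_keystream(key, iv, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

key, iv = b"some key", b"initial value 16"
msg = b"characters can be ciphered one at a time"

ct = ofb_xor(key, iv, msg)
assert ct != msg
assert ofb_xor(key, iv, ct) == msg
```

Because enciphering and deciphering are identical, the same-key caution
under multiple encryption applies to OFB as well.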
<A NAME = "OneTimePad"></A>
<P><DT><B>One Time Pad</B>
<DD>The term "one time pad" (OTP) is rather casually used for two
fundamentally different types of
<A HREF = "#Cipher">cipher</A>:
<P><OL>
<B><LI>The Theoretical One Time Pad:</B> a theoretical
<A HREF = "#Random">random</A>
source produces values which are
<A HREF = "#Combiner">combined</A> with data to produce
<A HREF = "#Ciphertext">ciphertext</A>. In a theoretical
discussion of this concept, we can simply <B>assume</B>
<I>perfect</I> randomness in the source, and this
<B>assumption</B> supports a mathematical proof that
the cipher is unbreakable. But the theoretical result applies
to reality <B>only if we can prove the
assumption is valid</B> in reality. Unfortunately, we
cannot do this, because <I>provably</I> perfect randomness
apparently cannot be attained in practice. So the theoretical
OTP does not really exist, except as a goal.
<P><B><LI>The Realized One Time Pad:</B> a
<A HREF = "#ReallyRandom">really random</A> source
produces values which are combined with data to produce
ciphertext. But because we can neither <I>assume</I> nor
<I>prove</I> perfect, theoretical-class randomness in any
real generator, this cipher does not have the mathematical
proof of the theoretical system. Thus, a realized one time
pad is <B>NOT <I>proven</I></B> unbreakable, although it may
in fact <I>be</I> unbreakable in practice. In this sense,
it is much like other realized ciphers.
</OL>
<P>A realized one time pad (OTP) is essentially a
<A HREF = "#StreamCipher">stream cipher</A> with a
<A HREF = "#ReallyRandom">really random</A>
<A HREF = "#ConfusionSequence">confusion sequence</A>
used exactly once. The confusion sequence is the
<A HREF = "#Key">key</A>, and it is as long as the data. Since
this amount of keying material can be awkward to transfer and keep,
we often see "pseudo" one-time pad designs which attempt to correct
this deficiency. Normally, the point is to achieve the theoretical
advantages of a one-time pad without the costs; the problem with
this is that the one-time pad theory of strength no longer applies.
These variations are best seen as classic stream cipher designs.
<P>In a realized one time pad, the confusion sequence must be
<I>unpredictable</I> (not generated from a small key value) and
must be transported to the far end and held at both locations in
absolute secrecy like any other secret key. But where a normal
secret key might range perhaps from 16 bytes to 160 bytes, there
must be as much OTP sequence as there will be data (which might
well be megabytes). And a normal secret key could itself be sent
under a key (as in a
<A HREF = "#MessageKey">message key</A> or under a
<A HREF = "#PublicKeyCipher">public key</A>). But an OTP sequence
<I>cannot</I> be sent under a key, since this would make the OTP as
weak as the key, in which case we might as well use a normal cipher.
All this implies very significant inconveniences, costs, and risks,
well beyond what one would at first expect, so even the realized
one time pad is generally considered <B>impractical</B>, except in
very special situations.
<P>In a realized one time pad, the confusion sequence itself must be
<A HREF = "#Random">random</A> for, if not, it will be somewhat
predictable. And, although we have a great many
<A HREF = "#Statistics">statistical</A> randomness tests,
there is no test which can <I>certify</I> a sequence as either
random or unpredictable. This means that a sequence which we assume
to be random may <B>not</B> be the unpredictable sequence we need,
and we can never know for sure. (This might be considered an
argument for using a
<A HREF = "#Combiner">combiner</A> with strength, such as a
<A HREF = "#LatinSquareCombiner">Latin square</A> or
<A HREF = "#DynamicSubstitutionCombiner">Dynamic Substitution</A>.)
In practice, the much touted "mathematically proven unbreakability"
of the one time pad depends upon an assumption of randomness and
unpredictability which we can neither test nor prove.
<P>The one time pad sometimes seems to have yet another level of
strength above the usual stream cipher, the ever-increasing amount
of "unpredictability" or
<A HREF = "#Entropy">entropy</A> in the
<A HREF = "#ConfusionSequence">confusion sequence</A>, leading to
an indefinite
<A HREF = "#UnicityDistance">unicity distance</A>.
In contrast, the typical
<A HREF = "#StreamCipher">stream cipher</A> will produce a long
sequence from a relatively small amount of initial
<A HREF = "#State">state</A>, and it can be argued that the entropy
of an
<A HREF = "#RNG">RNG</A> is just the number of bits in its initial
state. In theory, this might mean that the initial state or
<A HREF = "#Key">key</A> used in the stream cipher could be
identified after somewhat more than that same amount of data had been
enciphered. But it is also perfectly possible for an unsuspected
problem to occur in a really-random generator, and then the more
sequence generated, the more apparent and useful that problem might
be to an
<A HREF = "#Opponent">Opponent</A>.
<P>Nor does even a theoretical one time pad imply unconditional
security: Consider <I>A</I> sending the same message to <I>B</I>
and <I>C,</I> using, of course, two <I>different</I> pads. Now,
suppose the Opponents can acquire plaintext from <I>B</I> and
intercept the ciphertext to <I>C</I>. If the system is using the
usual
<A HREF = "#AdditiveCombiner">additive combiner</A>, the Opponents
can <I>reconstruct the pad</I> between <I>A</I> and <I>C</I>.
Now they can send <I>C</I> any message they want, and encipher it
under the correct pad. And <I>C</I> will never question such a
message, since <I>everyone knows</I> that a one time pad provides
"absolute" security as long as the pad is kept secure. Note that
both <I>A</I> and <I>C</I> did keep their pad secure, and they
were the only ones who had it.
<P>Various companies offer one time pad programs, and sometimes
also the keying or "pad" material.
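<P>The pad-reconstruction attack described above takes one line with
the usual additive (XOR) combiner:

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

message = b"retreat at once"
pad_to_B = secrets.token_bytes(len(message))
pad_to_C = secrets.token_bytes(len(message))

ct_to_B = xor(message, pad_to_B)
ct_to_C = xor(message, pad_to_C)

# Opponents obtain the plaintext from B and the ciphertext sent to C:
# with an additive combiner, that recovers the entire pad A shares with C.
recovered_pad = xor(message, ct_to_C)
assert recovered_pad == pad_to_C

# Now they can forge any message to C under the "unbreakable" pad.
forged = xor(b"advance at once", recovered_pad)
assert xor(forged, pad_to_C) == b"advance at once"
```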
<A NAME = "OneToOne"></A>
<P><DT><B>One-To-One</B>
<DD><A HREF = "#Injective">Injective</A>. A
<A HREF = "#Mapping">mapping</A> f: <I>X -> Y</I> where no two
values <I>x</I> in <I>X</I> produce the same result
<I>f(x)</I> in <I>Y.</I>
A one-to-one mapping is invertible on its range <I>f(X)</I>,
but unless <I>f</I> is also <A HREF = "#Onto">onto</A>, there is
no full inverse mapping g: <I>Y -> X</I>.
<A NAME = "OneWayDiffusion"></A>
<P><DT><B>One Way Diffusion</B>
<DD>In the context of a
<A HREF = "#BlockCipher">block cipher</A>, a one way
<A HREF = "#Diffusion">diffusion</A>
<A HREF = "#Layer">layer</A> will
carry any changes in the data
<A HREF = "#Block">block</A> in a direction from one side
of the block to the other, but not in the opposite direction.
This is the usual situation for fast, effective diffusion layer
realizations.
<A NAME = "Onto"></A>
<P><DT><B>Onto</B>
<DD><A HREF = "#Surjective">Surjective</A>.
A <A HREF = "#Mapping">mapping</A> f: <I>X -> Y</I> where
<I>f(x)</I> covers all elements in <I>Y.</I>
Not necessarily invertible, since multiple elements
<I>x</I> in <I>X</I> could produce the same <I>f(x)</I> in <I>Y.</I>
<PRE>
     +----------+          +----------+
     |          |   ONTO   |          |
     |    X     |          | Y = f(X) |
     |          |    f     |          |
     |          |   --->   |          |
     +----------+          +----------+
</PRE>
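As with one-to-one, the onto property is easy to check on a small domain; the mappings here are illustrative examples.

```python
# Onto (surjective): the image f(X) must cover every element of Y.
def is_onto(f, domain, codomain):
    return set(map(f, domain)) == set(codomain)

assert is_onto(lambda x: x % 4, range(8), range(4))     # onto, but not 1-1
assert not is_onto(lambda x: 2 * x, range(4), range(8)) # misses odd values
```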
<A NAME = "Opcode"></A>
<P><DT><B>Opcode</B>
<DD>Operation
<A HREF = "#Code">code</A>: a value which selects one operation from
among a set of possible operations. This is an encoding of functions
as values. These values may be interpreted by a
<A HREF = "#Computer">computer</A> to perform the selected operations
in their given sequence and produce a desired result.
Also see:
<A HREF = "#Software">software</A> and
<A HREF = "#Hardware">hardware</A>.
<A NAME = "OperatingMode"></A>
<P><DT><B>Operating Mode</B>
<DD>With respect to
<A HREF = "#BlockCipher">block ciphers</A>, a way to handle messages
which are larger than the defined
<A HREF = "#Block">block</A> size. Usually this means one of the
four block cipher "applications" defined for use with
<A HREF = "#DES">DES</A>:
<UL>
<LI><A HREF = "#ECB">ECB</A> or Electronic CodeBook;
<LI><A HREF = "#CBC">CBC</A> or Cipher Block Chaining;
<LI><A HREF = "#CFB">CFB</A> or Ciphertext FeedBack; and
<LI><A HREF = "#OFB">OFB</A> or Output FeedBack.
</UL>
<P>It can be argued that block cipher operating modes are
<A HREF = "#StreamCipher">stream "meta-ciphers"</A> in which the
streamed transformation is of full block cipher width, instead of
the usual stream cipher
<A HREF = "#Bit">bit</A>- or
<A HREF = "#Byte">byte</A>-width transformations.
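The "meta-cipher" view can be sketched with CBC, one of the four modes listed above. The block cipher here is a toy XOR with a fixed key, an illustrative stand-in with no strength of its own; only the chaining structure is the point.

```python
# CBC chaining: each plaintext block is combined with the previous
# ciphertext block before the (toy) block transformation is applied.
BLOCK = 8

def toy_block_encipher(block: bytes, key: bytes) -> bytes:
    # Stand-in "block cipher": XOR with the key (its own inverse).
    return bytes(b ^ k for b, k in zip(block, key))

def cbc_encipher(blocks, key, iv):
    prev, out = iv, []
    for pt in blocks:
        ct = toy_block_encipher(bytes(p ^ v for p, v in zip(pt, prev)), key)
        out.append(ct)
        prev = ct
    return out

def cbc_decipher(blocks, key, iv):
    prev, out = iv, []
    for ct in blocks:
        pt = bytes(p ^ v
                   for p, v in zip(toy_block_encipher(ct, key), prev))
        out.append(pt)
        prev = ct
    return out

key = b"\x13\x57\x9b\xdf\x02\x46\x8a\xce"
iv = b"\x00" * BLOCK
msg = [b"ABCDEFGH", b"IJKLMNOP"]
assert cbc_decipher(cbc_encipher(msg, key, iv), key, iv) == msg
```

Note that, unlike ECB, the chaining makes equal plaintext blocks produce different ciphertext blocks.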
<A NAME = "Opponent"></A>
<P><DT><B>Opponent</B>
<DD>A term used by some
<A HREF = "#Cryptographer">cryptographers</A> to refer to the
opposing
<A HREF = "#Cryptanalyst">cryptanalyst</A> or opposing team.
Sometimes used in preference to "the enemy."
<A NAME = "OR"></A>
<P><DT><B>OR</B>
<DD>A Boolean
<A HREF = "#LogicFunction">logic function</A> which is also
nonlinear under
<A HREF = "#Mod2">mod 2</A> addition.
<A NAME = "Order"></A>
<P><DT><B>Order</B>
<DD>In mathematics, typically the number of elements in a structure,
or the number of steps required to traverse a cyclic structure.
<A NAME = "Ordinal"></A>
<P><DT><B>Ordinal</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, measurements which are
ordered from smallest to largest. Also see:
<A HREF = "#Nominal">nominal</A>, and
<A HREF = "#Interval">interval</A>.
<A NAME = "Orthogonal"></A>
<P><DT><B>Orthogonal</B>
<DD>At right angles; on an independent dimension. Two structures
which each express an independent dimension.
<A NAME = "OrthogonalLatinSquares"></A>
<P><DT><B>Orthogonal Latin Squares</B>
<DD>Two
<A HREF = "#LatinSquare">Latin squares</A> of order <I>n,</I> which,
when superimposed, form each of the <I>n</I><SUP>2</SUP> possible
ordered pairs of <I>n</I> symbols exactly once. At most, <I>n</I>-1
Latin squares may be mutually orthogonal.
<PRE>
     3 1 2 0     0 3 2 1     30 13 22 01
     0 2 1 3     2 1 0 3  =  02 21 10 33
     1 3 0 2     1 2 3 0     11 32 03 20
     2 0 3 1     3 0 1 2     23 00 31 12
</PRE>
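The orthogonality of the two order-4 squares shown above can be verified mechanically: superimposition must yield all 16 ordered pairs, each exactly once.

```python
# The two Latin squares from the example above.
A = [[3, 1, 2, 0], [0, 2, 1, 3], [1, 3, 0, 2], [2, 0, 3, 1]]
B = [[0, 3, 2, 1], [2, 1, 0, 3], [1, 2, 3, 0], [3, 0, 1, 2]]

# Superimpose: collect the ordered pair at each cell.
pairs = {(A[r][c], B[r][c]) for r in range(4) for c in range(4)}
assert len(pairs) == 16     # all n**2 ordered pairs occur exactly once
```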
<P>Also see
<A HREF = "#BalancedBlockMixing">Balanced Block Mixing</A>.
<A NAME = "OTP"></A>
<P><DT><B>OTP</B>
<DD>
<A HREF = "#OneTimePad">One Time Pad</A>.
<A NAME = "OverallDiffusion"></A>
<P><DT><B>Overall Diffusion</B>
<DD>That property of an ideal
<A HREF = "#BlockCipher">block cipher</A> in which a change of
even a single message or
<A HREF = "#Plaintext">plaintext</A>
<A HREF = "#Bit">bit</A> will change every
<A HREF = "#Ciphertext">ciphertext</A> bit with
probability 0.5. In practice, a good block cipher will approach
this ideal. This means that about half of the output bits
should change for any possible change to the input block.
<P>Overall diffusion means that the ciphertext will appear to
change at
<A HREF = "#Random">random</A> even between related message blocks,
thus hiding message relationships which might be used to
<A HREF = "#Attack">attack</A> the cipher.
<P>Overall diffusion can be measured statistically in a realized
cipher and used to differentiate between better and worse
designs. Overall diffusion does not, by itself, define a good
cipher, but it is <I>required</I> in a good block cipher.
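Such a measurement might be sketched as follows, using SHA-256 truncated to 64 bits as an illustrative stand-in for a block cipher (an assumption made only so the sketch is self-contained): flip each input bit in turn and count how many output bits change.

```python
# Rough statistical measurement of overall diffusion on a 64-bit block.
import hashlib

def toy_block(x: bytes) -> bytes:
    # Stand-in for a block cipher: truncated hash of the input block.
    return hashlib.sha256(x).digest()[:8]

def bit_changes(a: bytes, b: bytes) -> int:
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

base = bytes(8)
counts = []
for bit in range(64):
    flipped = bytearray(base)
    flipped[bit // 8] ^= 1 << (bit % 8)      # change a single input bit
    counts.append(bit_changes(toy_block(base), toy_block(bytes(flipped))))

mean = sum(counts) / len(counts)
# Ideally each of the 64 output bits flips with probability 0.5,
# so the mean change count should be near 32.
assert 24 < mean < 40
```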
<P>Also see
<A HREF = "#Diffusion">diffusion</A>,
<A HREF = "#Avalanche">avalanche</A>,
<A HREF = "#StrictAvalancheCriterion">strict avalanche criterion</A>
and
<A HREF = "#Complete">complete</A>.
<A NAME = "Padding"></A>
<P><DT><HR><P><B>Padding</B>
<DD>In classical
<A HREF = "#Cryptography">cryptography</A>,
<A HREF = "#Random">random</A> data added to the start and
end of messages so as to conceal the length of the message, and
the position where coding actually starts.
<P>In more conventional computing, some additional data needed to
fill out a fixed-size data structure. This meaning also exists
in cryptography, where the last
<A HREF = "#Block">block</A> of a fixed-size
<A HREF = "#BlockCipher">block cipher</A> often must be padded to
fill the block.
<A NAME = "Password"></A>
<P><DT><B>Password</B>
<DD>A
<A HREF = "#Key">key</A>, in the form of a word. Also "pass phrase,"
for multiple-word keys. See:
<A HREF = "#UserAuthentication">user authentication</A>.
<A NAME = "Patent"></A>
<P><DT><B>Patent</B>
<DD>The legal right, formally granted by a government, to exclude
others from making, selling or using the particular invention
described in the patent deed. (The term "selling" is generally
understood to cover free distribution.) Note that a patent is
<I>not</I> the right to make the invention, if it is covered by
<I>other</I> unexpired patents. A patent constitutes the open
<I>publication</I> of an invention, in return for a limited-term
monopoly on its use. A patent is said to protect the <I>application</I>
of an idea (as opposed to the idea itself), and is distinct from
copyright, which protects the <I>expression</I> of an idea.
<P>The concept behind patenting is to establish intellectual property
in a way somewhat related to a mining claim or real estate. An
inventor of a machine or process can file a <I>claim</I> on the
innovation, provided that it is not previously published, and that
someone else does not already have such a claim. Actual patents
normally do not claim an overall machine, but just the
newly-innovative part, and wherever that part is used, it must be
licensed from the inventor. It is common for an inventor to refine
earlier work patented by someone else, but if the earlier patent has
not expired, the resulting patent often cannot be practiced without
a license from the earlier patent holder.
<P>Someone who comes up with a patentable invention and wishes to
give up <I>their</I> rights can simply publish a full description
of the invention. Simple publication should prevent an application
from anyone who has not already established legal proof that they
previously came up with the same invention. In the U.S., publication
also apparently sets a 1-year clock running for an application to be
filed by anyone who <I>does</I> have such proof. But coming up with
an invention does not take away <I>someone else's</I> rights if they
came up with the same thing first; they may have a year to file, and
their case might take several years to prosecute and issue.
<P>In the U.S., a patent is a non-renewable grant, previously lasting
17 years from issue date, now lasting 20 years from application date.
Both an application fee and an issue fee are required, as are
periodic "maintenance" fees throughout the life of the patent. There
are four main requirements:
<OL><P>
<P><B><LI>Statutory Class</B> (35 USC 101): The invention must
be either:
<UL>
<LI>a process,
<LI>a machine,
<LI>a manufacture,
<LI>a composition of materials, or
<LI>a new use for one of the above.
</UL>
<P><B><LI>Utility</B> (35 USC 101): The invention must be of some
use.
<P><B><LI>Novelty</B> (35 USC 102): The invention must have some
aspect which is different from all previous inventions and
public knowledge.
<P>A U.S. patent is <B>not</B> available if -- <B>before the
<I>invention</I> date</B> -- the invention was:
<UL>
<LI>Publicly known or used in the United States of America, or
<LI>Described in a printed publication (e.g., available
at a public library) anywhere
</UL>
(35 USC 102(a)).
<P>A U.S. patent is <B>not</B> available if --
<B>more than a year before the <I>application</I> date</B> -- the
invention was:
<UL>
<LI>In public use or on sale in the United States of
America, or
<LI>Described in a printed publication (e.g., available
at a public library) anywhere
</UL>
(35 USC 102(b)).
<P><B><LI>Unobviousness</B> (35 USC 103): The invention must have
<B>not</B> been obvious <I>to someone of ordinary skill</I> in
the field of the invention <I>at the time of the invention.</I>
Unobviousness has various general arguments, such as:
<UL>
<LI>Unexpected Results,
<LI>Unappreciated Advantage,
<LI>Solution of Long-Felt and Unsolved Need, and
<LI>Contrarian Invention (contrary to teachings of the
prior art),
</UL>
among many others.
</OL>
<P>When the same invention is claimed by different inventors, deciding
who has "priority" to be awarded the patent can require
<I>legally provable</I> dates for both "conception" and
"reduction to practice":
<UL>
<LI><I>Conception</I> can be proven by disclosure to others,
preferably in documents which can be signed and dated as
having been read and understood. The readers can then
testify as to exactly what was known and when it was known.
<LI><I>Reduction to Practice</I> may be the patent application
itself, or requires others either to watch the invention
operate or to make it operate on behalf of the inventor.
These events also should be carefully recorded in written
documents with signatures and dates.
</UL>
"In determining priority of invention, there shall be considered
not only the respective dates of conception and reduction to
practice of the invention, but also the reasonable diligence of
one who was first to conceive and last to reduce to practice . . ."
(35 USC 102(g)).
<P>Also see:
<A HREF = "#PriorArt">prior art</A> and our
<A HREF = "PATS/PATPOLI.HTM#ClaimsTutorial">claims tutorial</A>.
<P>In practice, a patent is rarely the intrusive prohibitive right
that it may at first appear to be, because patents are really about
<I>money</I> and <I>respect.</I> Ideally, a patent rewards the
inventor for doing research and development, and then disclosing an
invention to the
public; it is also a legal recognition of a contribution to society.
If someone infringes a patent in a way which affects sales, or
which implies that the inventor cannot do anything about it, the
patent holder can be expected to show some interest. But when little
or no money is involved, a patent can be
<A HREF = "#PatentInfringement">infringed</A> repeatedly with
little or no response, and typically this will have no effect on
future legal action.
<P>This simple introduction cannot begin to describe the complexity
involved in filing and prosecuting a patent application. Your author
does <I>not</I> recommend going it alone, unless one is willing to
put far more time into learning about it and doing it than one could
possibly imagine.
<A NAME = "PatentInfringement"></A>
<P><DT><B>Patent Infringement</B>
<DD>Patent infringement occurs when someone makes, sells, or uses a
<A HREF = "#Patent">patented</A> invention without license from the
patent holder.
<P>Normally the offender will be contacted, and there may be a
settlement and proper licensing, or the offender may be able to
design around the patent, or the offender may simply stop infringing.
Should none of these things occur, the appropriate eventual
response is a patent infringement lawsuit in federal court.
<A NAME = "PerfectSecrecy"></A>
<P><DT><B>Perfect Secrecy</B>
<DD>The unbreakable
<A HREF = "#Strength">strength</A> delivered by a
<A HREF = "#Cipher">cipher</A> in which all possible
<A HREF = "#Ciphertext">ciphertexts</A> may be
<A HREF = "#Key">key</A>-selected with equal probability given any
possible
<A HREF = "#Plaintext">plaintext</A>. This means that no ciphertext
can imply any particular plaintext any more than any other.
This sort of cipher needs as much keying information as there is
message information to be protected.
A cipher with perfect secrecy has at least as many keys as messages,
and may be seen as a (huge)
<A HREF = "#LatinSquare">Latin square</A>.
<P>There are some examples:
<UL>
<LI>(Theoretically) the
<A HREF = "#OneTimePad">one-time pad</A> with a perfectly
<A HREF = "#Random">random</A> pad generator.
<LI>The
<A HREF = "#DynamicTransposition">Dynamic
Transposition</A> cipher approaches perfect secrecy in
that every ciphertext is a bit-permuted balanced block. Thus,
every possible plaintext block is just a particular permutation
of any ciphertext block. Since the permutation is created by a
<A HREF = "#Key">keyed</A>
<A HREF = "#RNG">RNG</A>, we expect any particular permutation
to "never" re-occur, and be easily protected from
<A HREF = "#DefinedPlaintextAttack">defined plaintext attack</A>
with the usual
<A HREF = "#MessageKey">message key</A>.
We also expect that the RNG itself will be protected by the
vast number of different sequences which could produce the
exact same bit-pattern for any ciphertext result.
</UL>
<P> Also see:
<A HREF = "#IdealSecrecy">ideal secrecy</A>. From Claude Shannon.
<A NAME = "Permutation"></A>
<P><DT><B>Permutation</B>
<DD>The mathematical term for a particular arrangement of symbols,
objects, or other elements. With <I>n</I> symbols, there are
<PRE>
P(n) = n*(n-1)*(n-2)*...*2*1 = n!
</PRE>
or <I>n</I>-
<A HREF = "#Factorial">factorial</A> possible permutations.
The number of permutations of <I>n</I> things taken <I>k</I> at a
time is:
<PRE>
P(n,k) = n! / (n-k)!
</PRE>
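Both counting formulas are easy to check with small values; the standard library computes them directly.

```python
# P(n) = n! permutations of n symbols; P(n,k) = n!/(n-k)! arrangements
# of n things taken k at a time.
from math import factorial, perm

n, k = 5, 2
assert factorial(n) == 120                       # P(5) = 5!
assert factorial(n) // factorial(n - k) == 20    # P(5,2) = 5!/3!
assert perm(n, k) == 20                          # same result via math.perm
```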
<P>See the
<A HREF = "http://www.io.com/~ritter/JAVASCRP/PERMCOMB.HTM#Permutations">permutations</A>
section of the
<A HREF = "http://www.io.com/~ritter/#JavaScript">Ciphers By Ritter /
JavaScript</A> computation pages. Also see
<A HREF = "#Combination">combination</A> and
<A HREF = "#SymmetricGroup">symmetric group</A>.
<P>A
<A HREF = "#BlockCipher">block cipher</A> can be seen as a
transformation between
<A HREF = "#Plaintext">plaintext</A>
<A HREF = "#Block">block</A> values and
<A HREF = "#Ciphertext">ciphertext</A> block values,
and is thus an emulated
<A HREF = "#SimpleSubstitution">simple substitution</A> on huge
block-wide values. Both plaintext and ciphertext have the same
set of possible block values, and when the ciphertext values have
the same ordering as the plaintext, ciphering is obviously
ineffective.
So <I>effective</I> ciphering depends upon <I>re-arranging</I> the
ciphertext values from the plaintext ordering, which is a
<I>permutation</I> of the plaintext values. A block cipher is
<A HREF = "#Key">keyed</A> by constructing a <I>particular</I>
permutation of ciphertext values.
<P>Within an explicit
<A HREF = "#SubstitutionTable">table</A>, an arbitrary permutation
(one of the set of all possible permutations) can be produced by
<A HREF = "#Shuffle">shuffling</A> the elements under the
control of a
<A HREF = "#RandomNumberGenerator">random number generator</A>.
If, as usual, the random number generator has been initialized
from a
<A HREF = "#Key">key</A>, a particular permutation can be
produced for each particular key; thus, each key selects a
particular permutation.
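The key-selects-a-permutation idea can be sketched directly. Python's <TT>random</TT> module is only an illustrative stand-in for a cryptographic RNG; its <TT>shuffle</TT> is a Fisher-Yates shuffle.

```python
# Seed a deterministic generator from a key, then shuffle a table:
# each key selects one particular permutation.
import random

def keyed_table(key: bytes, size: int = 16):
    rng = random.Random(key)     # deterministic: same key, same sequence
    table = list(range(size))
    rng.shuffle(table)           # Fisher-Yates shuffle under the keyed RNG
    return table

assert keyed_table(b"key A") == keyed_table(b"key A")    # repeatable
assert keyed_table(b"key A") != keyed_table(b"key B")    # key selects it
assert sorted(keyed_table(b"key A")) == list(range(16))  # still a permutation
```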
<P>Also, the second part of
<A HREF = "#SubstitutionPermutation">substitution-permutation</A>
<A HREF = "#BlockCipher">block ciphers</A>:
First,
<A HREF = "#SimpleSubstitution">substitution</A> operations
<A HREF = "#Diffusion">diffuse</A> information across the
width of each substitution. Next, "permutation" operations act
to re-arrange the bits of the substituted result (more clearly
described as a set of
<A HREF = "#Transposition">transpositions</A>); this ends a
single round. In subsequent rounds, further substitutions
and transpositions occur until the block is thoroughly mixed and
<A HREF = "#OverallDiffusion">overall diffusion</A> hopefully
achieved.
<A NAME = "PGP"></A>
<P><DT><B>PGP</B>
<DD>A popular
<A HREF = "#PublicKeyCipher">public key cipher</A> system
using both
<A HREF = "#RSA">RSA</A> and
<A HREF = "#IDEA">IDEA</A> ciphers. RSA is used to transfer
a random key; IDEA is used to actually protect the message.
<P>One problem with PGP is a relatively unworkable facility
for
<A HREF = "#Authentication">authenticating</A> public keys.
While the users can compare a cryptographic hash of a key, this
requires communication through a different channel, which is more
than most users are willing to do. The result is a system
which generally supports
<A HREF = "#ManInTheMiddleAttack">man-in-the-middle attacks</A>,
and these do <B>not</B> require "breaking" either of the ciphers.
<A NAME = "PhysicallyRandom"></A>
<P><DT><B>Physically Random</B>
<DD>A random value or sequence derived from a physical source,
typically thermal-electrical noise. Also called
<A HREF = "#ReallyRandom">really random</A> and
<A HREF = "#TrulyRandom">truly random</A>.
<A NAME = "PinkNoise"></A>
<P><DT><B>Pink Noise</B>
<DD>A
<A HREF = "#Random">random</A>-like signal in which the magnitude
of the spectrum at each
<A HREF = "#Frequency">frequency</A> is proportional to the inverse
of the frequency, or 1/f. At twice the frequency, we have half the
energy, which is -3
<A HREF = "#dB">dB</A>. This is a frequency-response slope of
-3 dB / octave, or -10 dB / decade. As opposed to
<A HREF = "#WhiteNoise">white noise</A>, which has the same energy
at all frequencies, pink noise has more low-frequency or "red"
components, and so is called "pink."
<P>A common frequency response has half the output
<A HREF = "#Voltage">voltage</A> at twice the frequency. But this
is actually one-quarter the power and so is a -6 dB / octave drop.
For pink noise, the desired voltage ratio per octave is 0.707,
which gives the required -3 dB / octave.
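The decibel arithmetic is quickly confirmed: a voltage ratio maps to dB as 20 log<SUB>10</SUB> of the ratio.

```python
# Voltage ratios to decibels: dB = 20 * log10(Vout / Vin).
from math import log10, sqrt

def db(v_ratio: float) -> float:
    return 20 * log10(v_ratio)

assert round(db(0.5), 1) == -6.0           # half voltage: -6 dB / octave
assert round(db(1 / sqrt(2)), 1) == -3.0   # 0.707 ratio: -3 dB / octave
```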
<A NAME = "Plaintext"></A>
<P><DT><B>Plaintext</B>
<DD>Plaintext is the original, readable message. It is convenient
to think of plaintext as being actual language characters, but
may be any other symbols or values (such as arbitrary
<A HREF = "#Computer">computer</A>
data) which need to be protected.
<A NAME = "PoissonDistribution"></A>
<P><DT><B>Poisson Distribution</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, a simplified form of the
<A HREF = "#BinomialDistribution">binomial distribution</A>,
justified when we have:
<P><OL>
<LI>a large number of trials <I>n,</I>
<LI>a small probability of success <I>p,</I> and
<LI>an expectation <I>np</I> much smaller than SQRT(<I>n</I>).
</OL>
<P>The probability of finding exactly <I>k</I> successes when we
have expectation <I>u</I> is:
<PRE>
   P(k,u) = u<SUP>k</SUP> e<SUP>-u</SUP> / k!
</PRE>
where <I>e</I> is the base of natural logarithms:
<PRE>
e = 2.71828...
</PRE>
and <I>u</I> is:
<PRE>
u = n p
</PRE>
again for <I>n</I> independent trials, when each trial has
success probability <I>p.</I> In the Poisson distribution,
<I>u</I> is also both the mean and the variance.
<P>The ideal
<A HREF = "#Distribution">distribution</A> is produced by evaluating
the probability function for all possible <I>k,</I> from 0 to
<I>n.</I>
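Evaluating the probability function above, and checking that it does approximate the binomial when <I>n</I> is large, <I>p</I> small, and <I>np</I> much smaller than SQRT(<I>n</I>):

```python
# Poisson probability P(k,u) = u^k e^-u / k!, compared against the
# exact binomial probability for n = 1000, p = 0.005 (u = np = 5).
from math import comb, exp, factorial

def poisson(k: int, u: float) -> float:
    return u**k * exp(-u) / factorial(k)

n, p = 1000, 0.005
u = n * p
binom = comb(n, 3) * p**3 * (1 - p)**(n - 3)
assert abs(poisson(3, u) - binom) < 1e-3          # close approximation
assert abs(sum(poisson(k, u) for k in range(100)) - 1.0) < 1e-9
```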
<P>If we have an experiment which we think <I>should</I> produce a
Poisson distribution, and then repeatedly and systematically find
very improbable test values, we may choose to reject the
<A HREF = "#NullHypothesis">null hypothesis</A> that the experimental
distribution is in fact Poisson.
<P>Also see the
<A HREF = "http://www.io.com/~ritter/JAVASCRP/BINOMPOI.HTM#Poisson">Poisson</A>
section of the
<A HREF = "http://www.io.com/~ritter/#JavaScript">Ciphers By Ritter /
JavaScript</A> computation pages.
<A NAME = "PolyalphabeticCombiner"></A>
<P><DT><B>Polyalphabetic Combiner</B>
<DD>A
<A HREF = "#Combiner">combining</A>
<A HREF = "#Mechanism">mechanism</A> in which one input selects a
substitution alphabet (or table), and another input selects a
value from within the selected alphabet, said value becoming the
combining result. Also called a
<A HREF = "#TableSelectionCombiner">Table Selection Combiner</A>.
<A NAME = "PolyalphabeticSubstitution"></A>
<P><DT><B>Polyalphabetic Substitution</B>
<DD>A type of
<A HREF = "#Substitution">substitution</A> in which multiple
distinct
<A HREF = "#SimpleSubstitution">simple substitution</A>
alphabets are used.
<A NAME = "PolygramSubstitution"></A>
<P><DT><B>Polygram Substitution</B>
<DD>A type of
<A HREF = "#Substitution">substitution</A> in which one or more
symbols are substituted for one or more symbols. The most
general possible substitution.
<A NAME = "Polygraphic"></A>
<P><DT><B>Polygraphic</B>
<DD>Greek for "multiple letters." A
<A HREF = "#Cipher">cipher</A> which translates multiple
<A HREF = "#Plaintext">plaintext</A>
symbols at a time into
<A HREF = "#Ciphertext">ciphertext</A>.
As opposed to
<A HREF = "#Monographic">monographic</A>; also see
<A HREF = "#Homophonic">homophonic</A> and
<A HREF = "#Polyphonic">polyphonic</A>.
<A NAME = "Polynomial"></A>
<P><DT><B>Polynomial</B>
<DD>Mathematically, an expression in the standard form of:
<BIG><PRE>
c<SUB>n</SUB>x<SUP>n</SUP> + . . . + c<SUB>1</SUB>x + c<SUB>0</SUB>
</PRE></BIG>
The c's or <I>coefficients</I> are elements of some
<A HREF = "#Field">field</A> F.
The <I>degree</I> <I>n</I> is the value of the exponent of the
highest power term.
A <A HREF = "#Mod2Polynomial">mod 2 polynomial</A> of degree <I>n</I>
has <I>n</I>+1 bits representing the coefficients for each power:
<I>n</I>, <I>n</I>-1, ..., 1, 0.
<P>Perhaps the most insightful part of this is that the addition
of coefficients for a particular power does not "carry" into other
coefficients or columns.
<A NAME = "Polyphonic"></A>
<P><DT><B>Polyphonic</B>
<DD>Greek for "multiple sounds." The concept of having a letter
sequence which is pronounced in distinctly different ways,
depending on context. In
<A HREF = "#Cryptography">cryptography</A>, a
<A HREF = "#Cipher">cipher</A> which uses a single
<A HREF = "#Ciphertext">ciphertext</A> symbol to represent multiple
different
<A HREF = "#Plaintext">plaintext</A> symbols.
Also see
<A HREF = "#Homophonic">homophonic</A>,
<A HREF = "#Polygraphic">polygraphic</A> and
<A HREF = "#Monographic">monographic</A>.
<A NAME = "Population"></A>
<P><DT><B>Population</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, the size, or the number of
distinct elements in the possibly hidden universe of elements which
we can only know by
<A HREF = "#Sample">sampling</A>.
<A NAME = "PopulationEstimation"></A>
<P><DT><B>Population Estimation</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, techniques used to predict
the
<A HREF = "#Population">population</A> based only on information
from random
<A HREF = "#Sample">samples</A> on that population. See
<A HREF = "#AugmentedRepetitions">augmented repetitions</A>.
<A NAME = "Power"></A>
<P><DT><B>Power</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, the probability of rejecting
a false
<A HREF = "#NullHypothesis">null hypothesis</A>, and thus accepting
a true
<A HREF = "#AlternativeHypothesis">alternative hypothesis</A>.
<P>In
<A HREF = "#DC">DC</A>
<A HREF = "#Electronic">electronics</A>, simply
<A HREF = "#Voltage">voltage</A> times
<A HREF = "#Current">current</A>. In
<A HREF = "#AC">AC</A> electronics, the instantaneous product of
voltage times current, averaged over a repetitive cycle.
In either case the result is in watts, denoted W.
<A NAME = "Primitive"></A>
<P><DT><B>Primitive</B>
<DD>A value within a
<A HREF = "#FiniteField">finite field</A> which, when taken to
increasing powers, produces all field values except zero.
A primitive binary
<A HREF = "#Polynomial">polynomial</A> will be
<A HREF = "#Irreducible">irreducible</A>, but not all
irreducibles are necessarily primitive.
<A NAME = "PrimitivePolynomial"></A>
<P><DT><B>Primitive Polynomial</B>
<DD>An
<A HREF = "#Irreducible">irreducible</A>
<A HREF = "#Polynomial">polynomial</A>,
<A HREF = "#Primitive">primitive</A> within a given
<A HREF = "#Field">field</A>, which generates a
<A HREF = "#MaximalLength">maximal length</A> sequence in
<A HREF = "#LinearFeedbackShiftRegister">linear feedback shift
register</A> (LFSR) applications.
<P>All primitive polynomials are
<A HREF = "#Irreducible">irreducible</A>, but irreducibles are
not necessarily primitive, unless the degree <I>n</I> of the polynomial
is such that <NOBR>2<SUP><I>n</I></SUP> - 1</NOBR> is a
<A HREF = "#MersennePrime">Mersenne prime</A>.
One way to find a primitive polynomial is to select such a degree
and find an irreducible using Algorithm A of Ben-Or:
<PRE>
   1.  Generate a monic random polynomial gx of degree n over GF(q);
   2.  ux := x;
   3.  for k := 1 to (n DIV 2) do
   4.      ux := ux^q mod gx;
   5.      if GCD(gx, ux-x) &lt;&gt; 1 then go to 1 fi;
   6.  od
</PRE>
<BLOCKQUOTE>
Ben-Or, M. 1981. Probabilistic algorithms in finite fields.
<I>Proceedings of the 22nd IEEE Foundations of Computer Science
Symposium.</I> 394-398.
</BLOCKQUOTE>
<P>The result is a certified irreducible.
<A HREF = "#GF2n">GF(q)</A> represents the Galois Field to the
prime base <I>q</I>; for
<A HREF = "#Mod2">mod 2</A> polynomials, <I>q</I> is 2. These
computations require
<A HREF = "#Mod2Polynomial">mod 2 polynomial</A> arithmetic
operations for polynomials of large degree; "<I>ux<SUP>q</SUP></I>"
is a polynomial squared, and "mod <I>gx</I>" is a polynomial division.
A "monic" polynomial has a leading coefficient of 1; this is a
natural consequence of mod 2 polynomials of any degree. The first
step assigns the polynomial "x" to the variable <I>ux</I>; the
polynomial "x" is x<SUP>1</SUP>, otherwise known as "10".
<P>To get primitives of degree n where <NOBR>2<SUP>n</SUP> - 1</NOBR>
is composite, we certify irreducibles P of degree n. To do this, we
must factor the value
<NOBR>2<SUP>n</SUP> - 1</NOBR> (which can be a difficult problem,
in general). Then, for each factor d of <NOBR>2<SUP>n</SUP> - 1</NOBR>
we create the polynomial T(d) which is <NOBR>x<SUP>d</SUP> + 1</NOBR>;
this is a polynomial with just two bits set: bit d and bit 0. If P
evenly divides T(d) for some divisor d, P cannot be primitive.
So if P does not divide any T(d) for all distinct divisors d of
<NOBR>2<SUP>n</SUP> - 1,</NOBR> P is primitive.
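Both the Ben-Or test and the primitivity check above can be sketched for mod 2 polynomials represented as Python integers, bit <I>i</I> holding the coefficient of x<SUP>i</SUP> (so "x" is <TT>0b10</TT>). This is an illustrative sketch with q fixed at 2; note that P dividing <NOBR>x<SUP>d</SUP> + 1</NOBR> is the same as x<SUP>d</SUP> being congruent to 1 mod P.

```python
# Mod 2 polynomial arithmetic on integer bit-vectors.
def pmod(a: int, m: int) -> int:
    """Remainder of mod 2 polynomial a divided by m."""
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def pmulmod(a: int, b: int, m: int) -> int:
    """Product of mod 2 polynomials a and b, reduced mod m."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a = pmod(a << 1, m)
    return r

def ppowmod(a: int, e: int, m: int) -> int:
    """a**e mod m by square-and-multiply."""
    r = 1
    while e:
        if e & 1:
            r = pmulmod(r, a, m)
        a = pmulmod(a, a, m)
        e >>= 1
    return r

def pgcd(a: int, b: int) -> int:
    while b:
        a, b = b, pmod(a, b)
    return a

def is_irreducible(g: int) -> bool:
    """Ben-Or's Algorithm A with q = 2 (g is monic by representation)."""
    n = g.bit_length() - 1
    u = 0b10                         # ux := x
    for _ in range(n // 2):
        u = pmulmod(u, u, g)         # ux := ux^q mod gx
        if pgcd(g, u ^ 0b10) != 1:   # GCD(gx, ux - x) must be 1
            return False
    return True

def is_primitive(g: int) -> bool:
    """Primitive iff x^d != 1 mod g for every proper divisor d of 2^n - 1."""
    if not is_irreducible(g):
        return False
    order = 2 ** (g.bit_length() - 1) - 1
    return all(ppowmod(0b10, d, g) != 1
               for d in range(1, order) if order % d == 0)

assert is_primitive(0b10011)                # x^4 + x + 1: primitive
assert is_irreducible(0b11111)              # x^4+x^3+x^2+x+1: irreducible...
assert not is_primitive(0b11111)            # ...but x^5 = 1, so not primitive
assert not is_irreducible(0b10101)          # (x^2 + x + 1)^2: reducible
```

The second example shows exactly the distinction drawn above: an irreducible of degree 4 (where <NOBR>2<SUP>4</SUP> - 1 = 15</NOBR> is composite) which fails the T(d) divisibility test for d = 5.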
<A NAME = "Prime"></A>
<P><DT><B>Prime</B>
<DD>In general, a positive
<A HREF = "#Integer">integer</A> which is evenly divisible only
by itself and 1.
<P>Small primes can be found though the ever-popular
<A HREF = "#SieveOfEratosthenes">Sieve of Eratosthenes</A>, which
can also be used to develop a list of small primes used for
testing individual values.
A potential prime need only be divided by each prime equal to or
less than the square-root of the value of interest; if any
remainder is zero, the number is not prime.
<P>Large primes can be found by probabilistic tests.
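The sieve and the trial-division test described above fit in a few lines; the limits and test values here are illustrative.

```python
# Sieve of Eratosthenes for small primes, then trial division using
# only primes up to the square root of the candidate.
def small_primes(limit: int):
    flags = [True] * (limit + 1)
    flags[0] = flags[1] = False
    for i in range(2, int(limit**0.5) + 1):
        if flags[i]:
            for j in range(i * i, limit + 1, i):
                flags[j] = False            # strike out multiples of i
    return [i for i, f in enumerate(flags) if f]

def is_prime(n: int, primes) -> bool:
    for p in primes:
        if p * p > n:
            break                           # no divisor <= sqrt(n) found
        if n % p == 0:
            return False                    # zero remainder: composite
    return n > 1

primes = small_primes(100)
assert primes[:5] == [2, 3, 5, 7, 11]
assert is_prime(8191, primes)               # 2**13 - 1, a Mersenne prime
assert not is_prime(8189, primes)           # 19 * 431
```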
<A NAME = "PriorArt"></A>
<P><DT><B>Prior Art</B>
<DD>In
<A HREF = "#Patent">patents</A>, the knowledge published or otherwise
available to the public as of some date. Traditionally, this
"knowledge" is in ink-on-paper articles or patents, both of which
have provable release dates. Private "in house" journals
available only within a company generally would not be prior art,
nor would information which has been kept secret. Normally, we
expect prior art information to be available in a public library.
<P>In a U.S. application for patent, we are interested in the state
of the open or public art as it existed as of the invention date,
and also one year prior to the filing date. It is that art -- and
not something hidden or something later -- against which the new
application must be judged. Many things which seem "obvious" in
retrospect were really quite innovative at the time they were done.
<A NAME = "PRNG"></A>
<P><DT><B>PRNG</B>
<DD>
<A HREF = "#PseudoRandom">Pseudo Random</A>
Number Generator.
In general, <I>pseudo</I>randomness is the norm. Any
<A HREF = "#Computer">computer</A>
<A HREF = "#RandomNumberGenerator">random number generator</A>
which is not explicitly labeled as
<A HREF = "#PhysicallyRandom">physically random</A>,
<A HREF = "#ReallyRandom">really random</A>, or other such
description, is almost certainly <I>pseudo</I>random.
<A NAME = "Process"></A>
<P><DT><B>Process</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, a sequence of values;
a source or generator of such a sequence; a function.
<A NAME = "PseudoRandom"></A>
<P><DT><B>Pseudorandom</B>
<DD>A value or sequence of values typically produced by a
<A HREF = "#RandomNumberGenerator">random number generator</A>, a
<A HREF = "#Deterministic">deterministic</A> computational
mechanism. As opposed to
<A HREF = "#ReallyRandom">really random</A>. Also see
<A HREF = "#Random">random</A>.
<P>The usual random number generator is actually
<B><I>pseudo</I>random</B>. Given the initial
<A HREF = "#State">state</A>, the entire subsequent sequence is
completely pre-determined, but nevertheless exhibits many of the
expected characteristics of a random sequence.
Pseudorandomness supports generating the exact same cryptographic
sequence repeatedly at different times or locations.
Pseudorandomness is generally produced by a mathematical process,
which may provide good assurances as to the resulting
<A HREF = "#Statistics">statistics</A>,
assurances which a really random generator generally cannot provide.
<A NAME = "PublicKeyCipher"></A>
<P><DT><B>Public Key Cipher</B>
<DD>Also called an
<A HREF = "#AsymmetricCipher">asymmetric cipher</A> or a
<I>two-key</I> cipher. A
<A HREF = "#Cipher">cipher</A> which uses one
<A HREF = "#Key">key</A> to
<A HREF = "#Encipher">encipher</A> a message, and a <I>different</I>
key to
<A HREF = "#Decipher">decipher</A> the resulting
<A HREF = "#Ciphertext">ciphertext</A>. This allows the enciphering
key to be exposed, without exposing the message. As opposed to a
<A HREF = "#SecretKeyCipher">secret key cipher</A>.
<P>Either key can be used for enciphering or deciphering. Usually
the exposed key is called the "public" key, and the retained hidden
key is called the "private" key. The public key is distributed
widely, so anyone can use it to encipher a message which presumably
can only be deciphered by the hidden private key on the other end.
Note that the enciphering end normally does not possess a key which
will decipher a message which was just enciphered.
<P>The whole scheme of course depends upon the idea that the private
key cannot be developed from knowledge of the public key. The cipher
also must resist both
<A HREF = "#KnownPlaintextAttack">known-plaintext</A> <I>and</I>
<A HREF = "#DefinedPlaintextAttack">defined-plaintext</A> attack
(since anyone can generate any amount of plaintext and encipher it).
A public key cipher is vastly slower than a secret key cipher, and
so is normally used simply to deliver the
<A HREF = "#MessageKey">message key</A> or session key
for a conventional or secret key cipher.
<P>Although at first proclaimed as a solution to the
<A HREF = "#KeyDistributionProblem">key distribution problem</A>,
it soon became apparent that someone could <I>pretend</I> to
be someone else, and send out a "spoofed" public key. When people
use that key, the spoofer could receive the message, decipher and
read it, then re-encipher the message under the correct key and
send it to the correct destination. This is known as a
<A HREF = "#ManInTheMiddleAttack">man-in-the-middle</A> (MITM)
attack.
<P>A MITM attack is unusual in that it can penetrate cipher security
<I>without</I>
"<A HREF = "#Break">breaking</A>" <I>either</I> the public key cipher
<I>or</I> the internal secret key cipher, and takes almost no
computational effort. This is <I>extremely serious</I> because it
means that the use of even "unbreakable" ciphers is <B>not</B>
sufficient to guarantee privacy. All the effort spent on proving
the strength of either cipher is simply wasted when a MITM attack
is possible, and MITM attacks are only possible with public key
ciphers.
<P>To prevent spoofing, public keys must be
<A HREF = "#Authentication">authenticated</A> (or <I>validated</I> or
<I>certified</I>) as representing who they claim to represent. This can
be almost as difficult as the conventional key distribution problem
and generally requires complex protocols. And a failure in a key
certification protocol can expose a system which uses "unbreakable"
ciphers. In contrast, the simple use of an "unbreakable" secret key
cipher (with hand-delivered keys) <B>is</B> sufficient to guarantee
security. This is a real, vital difference between ciphering
models.
<A NAME = "Random"></A>
<P><DT><HR><P><B>Random</B>
<DD>A process which selects unpredictably, each time independent of
all previous times, from among multiple possible results; or a
result from such a process.
Ideally, an arbitrary
<A HREF = "#State">stateless</A> selection from among equiprobable
outcomes, thus producing a
<A HREF = "#UniformDistribution">uniform distribution</A> of values.
The absence of pattern. Also see
<A HREF = "#PseudoRandom">pseudorandom</A>.
<P>Randomness is an attribute of the <I>process</I> which generates
or selects "random" numbers rather than the numbers themselves.
But the numbers do carry the ghost of their creation:
If values really are randomly generated with the same probability,
we expect to find <I>almost</I> the same number of occurrences of
each value or each sequence of the same length. Over many values
and many sequences we expect to see results form in
<A HREF = "#Distribution">distributions</A>
which accord with our understanding of random processes. So if we
do not find these expectations in the resulting numbers, we may
have reason to suspect that the generating process is not random.
Unfortunately, any such suspicion is necessarily
<A HREF = "#Statistic">statistical</A> in
nature, and cannot produce absolute proof in either direction:
Randomness can produce <I>any</I> relationship between values,
including apparent
<A HREF = "#Correlation">correlations</A> (or their lack) which do
not in fact represent the systematic production of the generator.
(Also see the discussions of randomness testing in
<A HREF = "#Statistics">Statistics</A> and
<A HREF = "#NullHypothesis">Null Hypothesis</A>, the article
<A HREF = "http://www.io.com/~ritter/ARTS/RUNSUP.HTM">Chi-Square
Bias in Runs-Up/Down RNG Tests</A>, also
<A HREF = "http://www.io.com/~ritter/RES/RANDTEST.HTM">Randomness
Tests: A Literature Survey</A>, in the
<A HREF = "http://www.io.com/~ritter/#LiteratureSurveys">Literature
Surveys and Reviews</A> section of the
<A HREF = "http://www.io.com/~ritter/">Ciphers By Ritter</A> page, and
<A HREF = "http://www.io.com/~ritter/NETLINKS.HTM#RandomnessLinks">Randomness
Links</A>, in
<A HREF = "http://www.io.com/~ritter/NETLINKS.HTM">Ritter's
Net Links</A> page.)
<P>From one point of view, there are no "less random" or "more
random" sequences, since any sequence can be produced by a
random process. And any sequence (at least any <I>particular</I>
sequence) <I>also</I> can be produced by a
<A HREF = "#Deterministic">deterministic</A> computational
<A HREF = "#RandomNumberGenerator">random number generator</A>.
(We note that such generators are specifically designed to pass
statistical randomness tests, and do.)
So the difference is not in the sequences, <I>per se,</I> but
instead in the generators: For one thing, an RNG sequence is
deterministic and therefore may somehow be predicted. But, in
practice, extensive analysis could show deviations from randomness
in <I>either</I> the deterministic RNG designs <I>or</I> the
nondeterministic
<A HREF = "#ReallyRandom">really random</A> generation equipment,
and this could make even a nondeterministic generator somewhat
predictable.
<P>There are "more complex" and "less complex"
sequences according to various measures. For example:
<UL>
<LI><A HREF = "#LinearComplexity">Linear complexity</A> grades
sequences on the size of the minimum shift-register
<A HREF = "#State">state</A> needed to produce the sequence.
<LI>Kolmogorov-Chaitin complexity
grades sequences on the size of the description of the
algorithm needed to produce the sequence.
</UL>
These measures produce values related to the amount of pattern in
a sequence, or the extent to which a sequence can be predicted by
some algorithmic model. Such values describe the uncertainty of
a sequence, and are in this way related to
<A HREF = "#Entropy">entropy</A>.
<P>We should note that the subset of sequences which have a high
linear complexity excludes a substantial subset which does not. So
if we avoid sequences with low linear complexity, any sequence we
do accept must be <I>more</I> probable than it would be in the
unfiltered set of all possible sequences. In this case, the
expected higher uncertainty of the sequence itself is at least
partly offset by the certainty that such a sequence will be used.
Similar logic applies to
<A HREF = "#S-Box">S-box</A> measurement and selection.
<P>Oddly -- and much like
<A HREF = "#Strength">strength</A> in
<A HREF = "#Cipher">ciphers</A> -- the "unpredictable" part of
randomness is
<A HREF = "#Contextual">contextual</A> and
<A HREF = "#Subjective">subjective</A>, rather than the
<A HREF = "#Absolute">absolute</A> and
<A HREF = "#Objective">objective</A> qualities we like in Science.
While the sequence from a complex
<A HREF = "#RNG">RNG</A> can <I>appear</I> random, if we know the
secret of the generator construction, and its
<A HREF = "#State">state</A>, we can predict
the sequence exactly. But often we are in the position of seeing the
sequence alone, <I>without</I> knowing the source, the construction,
or the internal state. So while <I>we</I> might see a sequence as
"random," that same sequence might be absolutely predictable (and
thus <I>not</I> random) to someone who knows "the secret."
<A NAME = "RandomNumberGenerator"></A>
<P><DT><B>Random Number Generator</B>
<DD>A
<A HREF = "#Random">random</A> number generator is a standard
computational tool which creates a sequence of apparently
unrelated numbers which are often used in statistics and other
computations.
<P>In practice, most random number generators are
<A HREF = "#Deterministic">deterministic</A> computational
mechanisms, and each number is directly determined from the
previous
<A HREF = "#State">state</A> of the mechanism. Such a sequence is
often called
<A HREF = "#PseudoRandom">pseudo-random</A>, to distinguish it
from a
<A HREF = "#ReallyRandom">really random</A> sequence somehow
composed of actually unrelated values.
<P>A computational random number generator will always generate
the same sequence if it is started in the same state. So if we
initialize the state from a
<A HREF = "#Key">key</A>, we can use the random number
generator to
<A HREF = "#Shuffle">shuffle</A> a table into a particular order
which we can reconstruct any time we have the same key.
(See, for example:
<A HREF = "http://www.io.com/~ritter/KEYSHUF.HTM">A Keyed
Shuffling System for Block Cipher Cryptography</A>.)
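<P>As an illustrative sketch of this key-to-table idea (Python's
built-in Mersenne Twister stands in here for a keyed cryptographic
generator; names are mine, not from the referenced article):

```python
import random

def keyed_shuffle(items, key):
    """Shuffle a copy of items with a deterministic generator seeded
    from the key.  The same key always rebuilds the same order.
    (A sketch only: a real cipher would use a cryptographic RNG.)"""
    rng = random.Random(key)      # generator state set from the key
    table = list(items)
    rng.shuffle(table)
    return table

t1 = keyed_shuffle(range(256), "my key")
t2 = keyed_shuffle(range(256), "my key")
t3 = keyed_shuffle(range(256), "other key")
assert t1 == t2                          # same key, same table
assert t1 != t3                          # different key, different order
assert sorted(t1) == list(range(256))    # still a permutation
```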
<P>Note that random number generators are <I>designed</I> to
pass the many
<A HREF = "#Statistic">statistical</A> tests of randomness;
clearly, such tests do not indicate a
<A HREF = "#ReallyRandom">really random</A> sequence.
Moreover, if we define "random" as "the absence of any pattern,"
the only way we could validate such a sequence is by checking for
every possible pattern. But there are too many patterns, so "real"
randomness would seem to be impossible to check experimentally.
(Also see the discussions of randomness testing in
<A HREF = "#Statistics">Statistics</A> and
<A HREF = "#NullHypothesis">Null Hypothesis</A>.)
<P>Also see the article:
<A HREF = "http://www.io.com/~ritter/ARTS/CRNG2ART.HTM">The Efficient
Generation of Cryptographic Confusion Sequences</A>, plus
<A HREF = "http://www.io.com/~ritter/RES/RNGENS.HTM">RNG
Implementations: A Literature Survey</A>,
<A HREF = "http://www.io.com/~ritter/RES/RNGSURVE.HTM">RNG
Surveys: A Literature Survey</A>, in the
<A HREF = "http://www.io.com/~ritter/#LiteratureSurveys">Literature
Surveys and Reviews</A> section of the
<A HREF = "http://www.io.com/~ritter/">Ciphers By Ritter</A> page, and
<A HREF = "http://www.io.com/~ritter/NETLINKS.HTM#RandomnessLinks">Randomness
Links</A>, in
<A HREF = "http://www.io.com/~ritter/NETLINKS.HTM">Ritter's
Net Links</A> page.
<A NAME = "RandomVariable"></A>
<P><DT><B>Random Variable</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, a term or label for an
unknown value. Also used when each of the possible values has
some known probability.
<P>A <I>discrete</I> random variable takes on values from a finite set.
The probability of each value is the <I>frequency function</I> or
<I>probability density function</I>, and the graph of the frequency
function is the frequency
<A HREF = "#Distribution">distribution</A>.
<A NAME = "Range"></A>
<P><DT><B>Range</B>
<DD>The set of the results from a
<A HREF = "#Mapping">mapping</A> for all possible arguments.
Also see:
<A HREF = "#Domain">domain</A>.
<A NAME = "ReallyRandom"></A>
<P><DT><B>Really Random</B>
<DD>A
<A HREF = "#Random">random</A> value or sequence derived from a
source which is expected to produce no predictable or repeatable
relationship between values.
<P>Examples of a really random source might include radioactive
decay, Johnson or thermal noise, shot noise from a Zener diode or
reverse-biased junction in breakdown, etc. Clearly, some sort of
circuitry will be required to detect these generally low-level
events, and the quality of the result is often directly related
to the design of the
<A HREF = "#Electronic">electronic</A> processing.
Other sources of randomness might be precise keystroke timing,
and the accumulated hash of text of substantial size.
Also called
<A HREF = "#PhysicallyRandom">physically random</A> and
<A HREF = "#TrulyRandom">truly random</A>.
As opposed to
<A HREF = "#PseudoRandom">pseudorandom</A> (see
<A HREF = "#RandomNumberGenerator">random number generator</A>).
<P>Really random values are particularly important as
<A HREF = "#MessageKey">message key</A> objects, or as a sequence
for use in a realized
<A HREF = "#OneTimePad">one-time pad</A>.
<P>Also see:
<A HREF = "http://www.io.com/~ritter/RES/RNGMACH.HTM">Random Number
Machines: A Literature Survey</A> and
<A HREF = "http://www.io.com/~ritter/RES/NOISE.HTM">Random Electrical
Noise: A Literature Survey</A>, in the
<A HREF = "http://www.io.com/~ritter/#LiteratureSurveys">Literature
Surveys and Reviews</A> section of the
<A HREF = "http://www.io.com/~ritter/">Ciphers By Ritter</A> page, and
<A HREF = "http://www.io.com/~ritter/NETLINKS.HTM#RandomnessLinks">Randomness
Links</A>, in
<A HREF = "http://www.io.com/~ritter/NETLINKS.HTM">Ritter's
Net Links</A> page.
<A NAME = "Relay"></A>
<P><DT><B>Relay</B>
<DD>Classically, an electro-mechanical
<A HREF = "#Component">component</A> consisting of a mechanical
<A HREF = "#Switch">switch</A> operated by the magnetic force
produced by an electromagnet, a conductor wound around an iron
dowel or core.
A relay is at least potentially a sort of mechanical (slow) and
<A HREF = "#Linear">nonlinear</A>
<A HREF = "#Amplifier">amplifier</A> which is well-suited to
power control.
<A NAME = "ResearchHypothesis"></A>
<P><DT><B>Research Hypothesis</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, the statement formulated so
that the logically contrary statement, the
<A HREF = "#NullHypothesis">null hypothesis</A> <I>H</I><SUB>0</SUB>
has a test
<A HREF = "#Statistic">statistic</A> with a known
<A HREF = "#Distribution">distribution</A> for the case when there
is nothing unusual to detect. Also called the
<A HREF = "#AlternativeHypothesis">alternative hypothesis</A>
<I>H</I><SUB>1</SUB>, and logically identical to
"NOT-<I>H</I><SUB>0</SUB>" or "<I>H</I><SUB>0</SUB>
is not true."
<A NAME = "Resistor"></A>
<P><DT><B>Resistor</B>
<DD>A basic electronic
<A HREF = "#Component">component</A> in which
<A HREF = "#Voltage">voltage</A> and
<A HREF = "#Current">current</A> are
<A HREF = "#Linear">linearly</A> related by
Ohm's Law: <NOBR><B>E = IR</B></NOBR>.
Resistors can thus be used to limit current I given voltage E:
<NOBR>(I = E/R),</NOBR> or to produce voltage E from current I:
<NOBR>(E = IR).</NOBR>
Two resistors in
series
can divide voltage Ein to produce the output voltage Eo:
<NOBR>( Eo = Ein(R1/(R1+R2)) ).</NOBR>
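<P>The formulas above apply directly; a small Python sketch (following
the divider expression as given, with the output assumed taken across
R1):

```python
# Ohm's Law and the two-resistor voltage divider.
def current(E, R):
    """I = E / R: current in amperes, from volts and ohms."""
    return E / R

def divider(Ein, R1, R2):
    """Eo = Ein * R1 / (R1 + R2): output taken across R1."""
    return Ein * R1 / (R1 + R2)

assert current(10.0, 1000.0) == 0.01         # 10 V across 1k: 10 mA
assert divider(10.0, 1000.0, 1000.0) == 5.0  # equal resistors halve Ein
```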
<P>Also see
<A HREF = "#Capacitor">capacitor</A> and
<A HREF = "#Inductor">inductor</A>.
<A NAME = "Ring"></A>
<P><DT><B>Ring</B>
<DD>In abstract algebra, a nonempty set R with two dyadic
(two-input, one-output) operations which we choose to call
"addition" and "multiplication" and denote + and * as usual.
If elements (not necessarily numbers) a, b are in R, then a+b is
in R, and ab (or a*b) is also in R. The following properties
hold:
<OL>
<LI><B>Addition is commutative:</B> a + b = b + a
<LI><B>Addition is associative:</B> (a + b) + c = a + (b + c)
<LI><B>There is a "zero" or additive identity:</B> a + 0 = a
<LI><B>There is an additive inverse:</B> for any a there is an
x in R such that a + x = 0
<LI><B>Multiplication is associative:</B> (ab)c = a(bc)
<LI><B>Multiplication is distributive:</B> a(b + c) = ab + ac and
(b + c)a = ba + ca
<LI><B>In a commutative ring, multiplication is commutative:</B>
ab = ba
<LI><B>In a ring with unity, there is a multiplicative identity:</B>
for e in R, ea = ae = a
</OL>
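<P>Because each property is stated over all elements, the axioms can
be verified exhaustively for a small finite ring. For example, the
integers modulo 6 form a commutative ring with unity (a Python sketch;
any modulus works the same way):

```python
# Exhaustively verify the ring axioms for Z6, the integers mod 6.
n = 6
R = range(n)
add = lambda a, b: (a + b) % n
mul = lambda a, b: (a * b) % n

for a in R:
    assert add(a, 0) == a                         # 3: additive identity
    assert any(add(a, x) == 0 for x in R)         # 4: additive inverse
    assert mul(1, a) == a and mul(a, 1) == a      # 8: unity, e = 1
    for b in R:
        assert add(a, b) == add(b, a)             # 1: + commutes
        assert mul(a, b) == mul(b, a)             # 7: * commutes
        for c in R:
            assert add(add(a, b), c) == add(a, add(b, c))          # 2
            assert mul(mul(a, b), c) == mul(a, mul(b, c))          # 5
            assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))  # 6
```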
<A NAME = "Root"></A>
<P><DT><B>Root</B>
<DD>A solution: A value which, when substituted for a variable in
a mathematical equation, makes the statement true.
<A NAME = "RMS"></A>
<P><DT><B>RMS</B>
<DD><A HREF = "#RootMeanSquare">root mean square</A>.
<A NAME = "RootMeanSquare"></A>
<P><DT><B>Root Mean Square</B>
<DD>The square root of the mean of the instantaneous values squared.
Thus, when measuring
<A HREF = "#Voltage">voltage</A> or
<A HREF = "#Current">current</A>, a value proportional to the average
<A HREF = "#Power">power</A> in watts, even in a complex waveform.
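<P>For sampled data the mean of the squares replaces the integral; a
Python sketch, checking the familiar result that a unit sine wave has
an RMS value of 1/sqrt(2):

```python
import math

def rms(samples):
    """Square root of the mean of the squares: a discrete stand-in
    for the integral definition."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# One full period of a unit sine wave: RMS approaches 1/sqrt(2).
n = 10000
wave = [math.sin(2 * math.pi * k / n) for k in range(n)]
assert abs(rms(wave) - 1 / math.sqrt(2)) < 1e-6
```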
<A NAME = "RNG"></A>
<P><DT><B>RNG</B>
<DD>
<A HREF = "#RandomNumberGenerator">Random Number Generator</A>.
<A NAME = "Round"></A>
<P><DT><B>Round</B>
<DD>In the context of
<A HREF = "#BlockCipher">block cipher</A> design, a term often
associated with a
<A HREF = "#FeistelConstruction">Feistel</A> block cipher such as
<A HREF = "#DES">DES</A>. A round is the set of
operations which are repeated multiple times to produce the
final data. For example, DES uses 16 generally identical
rounds, each of which performs a number of operations.
As opposed to a
<A HREF = "#Layer">layer</A>, which is not applied repeatedly.
<A NAME = "RSA"></A>
<P><DT><B>RSA</B>
<DD>The name of an algorithm published by Ron Rivest, Adi Shamir,
and Len Adleman (thus, R.S.A.). The first major
<A HREF = "#PublicKeyCipher">public key</A> system.
<P>Based on number-theoretic concepts and using huge numerical
values, an RSA key must be perhaps ten or more times as long as
a secret key for similar security.
<A NAME = "RunningKey"></A>
<P><DT><B>Running Key</B>
<DD>The
<A HREF = "#ConfusionSequence">confusion sequence</A> in a
<A HREF = "#StreamCipher">stream cipher</A>.
<A NAME = "Salt"></A>
<P><DT><HR><P><B>Salt</B>
<DD>An unnecessarily cute and sadly non-descriptive name for an
arbitrary value, unique to a particular computer or installation,
prepended to a password before
<A HREF = "#Hash">hash</A> authentication. The "salt" acts to
complicate attacks on the password user-identification process by
giving the same password different hash results on different systems.
Ideally, this would be a sort of
<A HREF = "#Key">keying</A> for a secure hash.
<A NAME = "Sample"></A>
<P><DT><B>Sample</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, one or more elements,
typically drawn at
<A HREF = "#Random">random</A> from some
<A HREF = "#Population">population</A>.
<P>Normally, we cannot hope to examine the full population, and so
must instead investigate <I>samples</I> of the population, with the
hope that they represent the larger whole.
Often, random sampling occurs "with replacement"; effectively,
each individual sample is returned to the population before the next
sample is drawn.
<A NAME = "S-Box"></A>
<P><DT><B>S-Box</B>
<DD><A HREF = "#Substitution">Substitution</A> box or
<A HREF = "#SubstitutionTable">table</A>; typically a
<A HREF = "#Component">component</A> of a cryptographic
<A HREF = "#System">system</A>. "S-box" is a rather
non-specific term, however, since S-boxes can have more inputs than
outputs, or more outputs than inputs, each of which makes a single
invertible table impossible. The S-boxes used in
<A HREF = "#DES">DES</A>
contain multiple invertible substitution tables, with the particular
table used at any time being data-selected.
<P>One possible S-box is the identity transformation (0->0, 1->1,
2->2, ...) which clearly has no effect at all, while every other
transformation has at least some effect. So different S-boxes
obviously <I>can</I> contain different amounts of some qualities.
Qualities often mentioned include
<A HREF = "#Avalanche">avalanche</A> and
<A HREF = "#BooleanFunctionNonlinearity">Boolean function
nonlinearity</A>. However, one might expect that different ciphering
structures will need <I>different</I> table characteristics to a
greater or lesser degree. So the discussion of S-box
<A HREF = "#Strength">strength</A> always occurs within the context
of a particular
<A HREF = "#Cipher">cipher</A> construction.
<H4>S-Box Avalanche</H4>
<P>With respect to avalanche, any input change -- even one bit -- will
select a different table entry. Over all possible input values and
changes, the number of output bits changed will have a
<A HREF = "#BinomialDistribution">binomial distribution</A>.
(See the
<A HREF = "http://www.io.com/~ritter/JAVASCRP/BINOMPOI.HTM#BitChanges">bit changes</A>
section of the
<A HREF = "http://www.io.com/~ritter/#JavaScript">Ciphers By Ritter /
JavaScript</A> computation pages.) So, in this respect, all tables
are equal.
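<P>The binomial behavior is easy to check experimentally. This Python
sketch (names illustrative) tabulates output-bit changes over every
single-bit input change of one randomly-constructed 8-bit table:

```python
import random

def avalanche_counts(sbox, bits=8):
    """For every input value and every single-bit input change,
    count how many output bits change.  Over a random table these
    counts follow (nearly) a binomial(bits, 1/2) distribution."""
    counts = []
    for x in range(1 << bits):
        for i in range(bits):
            diff = sbox[x] ^ sbox[x ^ (1 << i)]
            counts.append(bin(diff).count("1"))
    return counts

rng = random.Random(1)            # fixed seed: repeatable demonstration
sbox = list(range(256))
rng.shuffle(sbox)                 # a random 8-bit permutation table
counts = avalanche_counts(sbox)
mean = sum(counts) / len(counts)
assert 3.5 < mean < 4.5           # mean change near 4 of 8 output bits
```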
<P>On the other hand, it is possible to arrange tables so that
single-bit input changes are guaranteed to produce at least two-bit
output changes, and this would seem to improve avalanche. But we
note that this is <I>probable</I> even with a
randomly-constructed table, so we have to ask just how much this
guarantee has improved things. In a Feistel cipher, it seems like
this <I>might</I> reduce the number of needed
<A HREF = "#Round">rounds</A> by one. But in actual operation,
the plaintext block is generally <I>randomized,</I> as in
<A HREF = "#CBC">CBC-mode</A>. This means that the probability of
getting a single-bit change in operation is very low anyway.
<P>It is true that cipher avalanche is tested using single-bit input
changes, and that is the way avalanche is defined. The point of this
is to assure that every output bit is "affected" by every input bit.
But I see this as more of an experimental requirement than an
operational issue that need be optimized.
<H4>S-Box Nonlinearity</H4>
<P>With respect to
<A HREF = "#BooleanFunctionNonlinearity">Boolean function nonlinearity</A>,
as tables get larger it becomes very difficult -- and essentially
impossible -- to find tables with ideal nonlinearity values. This means
that we are always accepting a compromise value, and this is
especially the case if the table must also have high values of
other S-box qualities.
<P>Even randomly-constructed tables tend to have reasonable
nonlinearity values. We might expect an 8-bit table to have a
nonlinearity of about 100 (that is, 100 bits must change in one of
the eight 256-bit output functions to reach the closest affine
Boolean function). Experimental measurement of the nonlinearity of
1,000,000 random 8-bit tables shows exactly one table with a
nonlinearity as low as 78, and the computed probability of an
actually <I>linear</I> table (nonlinearity zero) is something like
10<SUP>-72</SUP> or 2<SUP>-242</SUP>.
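<P>Nonlinearity values like these can be computed from the Walsh
transform: the distance to the closest affine Boolean function is
2<SUP>n-1</SUP> minus half the largest transform magnitude. A Python
sketch of the direct (not fast-transform) computation:

```python
def nonlinearity(f, n):
    """Distance from Boolean function f (a list of 2^n bits) to the
    closest affine function: 2^(n-1) - max_a |W(a)| / 2, where W is
    the Walsh transform of (-1)^f.  Direct O(2^(2n)) computation."""
    size = 1 << n
    w_max = 0
    for a in range(size):
        acc = 0
        for x in range(size):
            # sign of (-1)^(f(x) XOR parity(a AND x))
            sign = (f[x] + bin(a & x).count("1")) & 1
            acc += -1 if sign else 1
        w_max = max(w_max, abs(acc))
    return (size >> 1) - (w_max >> 1)

# Any linear function -- the parity of some selected input bits --
# must measure as nonlinearity zero.
lin = [bin(x & 0b10110001).count("1") & 1 for x in range(256)]
assert nonlinearity(lin, 8) == 0
```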
<P>The NSA-designed 8-bit table in the Skipjack cipher has a computed
nonlinearity of 104. While not quite the highest value we could
find, it <I>is</I> in the top 2.5 percent of the distribution, and it
seems improbable that this occurred by accident. We might assume
that this table is representative of the modern understanding of the
needs of a Feistel design with a fixed table. If so, we might
conclude that good nonlinearity (or something very much like it) is
a necessary, if not quite sufficient, part of the design.
<H4>Keyed S-Boxes</H4>
<P>It is "easy" to construct
<A HREF = "#Key">keyed</A> S-boxes, by
<A HREF = "#Shuffle">shuffling</A> under the control of a keyed
cryptographic
<A HREF = "#RandomNumberGenerator">random number generator</A>.
(See, for example:
<A HREF = "http://www.io.com/~ritter/KEYSHUF.HTM">A Keyed
Shuffling System for Block Cipher Cryptography</A>.) This has the
significant advantage of providing no fixed tables for The Opponent
to understand and attack.
<P>One question is whether one should attempt to measure and discard
tables with poorer qualities than others. My personal feeling is
that the ciphering structure should be strong enough to handle the
expected random table distribution without added measurement and
selection.
<P>Also see:
<A HREF = "http://www.io.com/~ritter/RES/SBOXDESN.HTM">S-Box Design:
A Literature Survey</A>, in the
<A HREF = "http://www.io.com/~ritter/#LiteratureSurveys">Literature
Surveys and Reviews</A> section of the
<A HREF = "http://www.io.com/~ritter/">Ciphers By Ritter</A> page.
<A NAME = "Scalable"></A>
<P><DT><B>Scalable</B>
<DD>A
<A HREF = "#Cipher">cipher</A> design which can produce both
large real ciphers
and tiny experimental versions from the exact same construction
rules. Scalability is about more than just variable size:
Scalability is about establishing a uniform structural identity
which is size-independent, so that we achieve a strong cipher
near the top, and a tiny but accurate model that we can
investigate near the bottom.
<P>While full-size ciphers can never be exhaustively tested, tiny
cipher models <I>can</I> be approached experimentally, and any
flaws in them probably will be present in the full-scale versions
we propose to use.
Just as mathematics works the same for numbers large or small, a
<A HREF = "#BackDoor">backdoor</A> cipher built from fixed
construction rules must have the same sort of backdoor, whether
built large or small.
<P>For
<A HREF = "#BlockCipher">block ciphers</A>, the real block size
must be at least 128 bits, and the experimental block size
probably should be between 8 and 16 bits.
Such tiny ciphers can be directly compared to keyed
<A HREF = "#SubstitutionTable">substitution tables</A>
of the same size, which are the ideal theoretical model of a
block cipher.
<P>Potentially, scalability does far more than just <I>simplify</I>
testing: Scalability is an enabling technology that supports
experimental analysis which is otherwise <I>impossible.</I>
<A NAME = "Secrecy"></A>
<P><DT><B>Secrecy</B>
<DD>One of the objectives of
<A HREF = "#Cryptography">cryptography</A>: Keeping private
information private. Also see:
<A HREF = "#Trust">trust</A>.
<P>In a
<A HREF = "#SecretKeyCipher">secret key cipher</A>, secrecy
implies the use of a
<A HREF = "#Strength">strong</A>
<A HREF = "#Cipher">cipher</A>. Secrecy in communication
requires the
<A HREF = "#Security">secure</A> distribution of secret
<A HREF = "#Key">keys</A> to both ends (this is the
<A HREF = "#KeyDistributionProblem">key distribution problem</A>).
<P>In a
<A HREF = "#PublicKeyCipher">public key cipher</A>, the ability
to expose
<A HREF = "#Key">keys</A> apparently solves the
<A HREF = "#KeyDistributionProblem">key distribution problem</A>.
But communications secrecy requires that public keys be
<A HREF = "#Authentication">authenticated</A> (<I>certified</I>)
as belonging to their supposed owner. This must occur to
cryptographic levels of assurance, because failure leads to
immediate vulnerability under a
<A HREF = "#ManInTheMiddleAttack">man-in-the-middle attack</A>.
The possibility of this sort of
<A HREF = "#Attack">attack</A> is very disturbing, because it
needs little computation, and does not involve
<A HREF = "#Break">breaking</A> any cipher, which makes all
discussion of
<A HREF = "#Cipher">cipher</A>
<A HREF = "#Strength">strength</A> simply irrelevant.
<A NAME = "SecretCode"></A>
<P><DT><B>Secret Code</B>
<DD>A <A HREF = "#Code">coding</A> in which the correspondence
between symbol and code value is kept secret.
<A NAME = "SecretKeyCipher"></A>
<P><DT><B>Secret Key Cipher</B>
<DD>Also called a
<A HREF = "#SymmetricCipher">symmetric cipher</A> or
<A HREF = "#ConventionalCipher">conventional cipher</A>. A
<A HREF = "#Cipher">cipher</A> in which the exact same
<A HREF = "#Key">key</A> is
used to encipher a message, and then decipher the resulting
<A HREF = "#Ciphertext">ciphertext</A>. As opposed to a
<A HREF = "#PublicKeyCipher">public key cipher</A>.
<A NAME = "Security"></A>
<P><DT><B>Security</B>
<DD>Protection of a vital quality (such as secrecy, or safety, or
even wealth) from infringement, and the resulting relief from fear
and anxiety. The ability to engage and defeat attempts to damage,
weaken, or destroy a vital quality.
Security, in the form of assuring the secrecy of information while
in storage or transit, is the fundamental role of
<A HREF = "#Cryptography">cryptography</A>.
<P>A secure cryptosystem physically or logically <I>prevents</I>
unauthorized disclosure of its protected data. This is
<I>independent</I> of whether the attacker is a government agent,
a criminal, a private detective, some corporate security person,
or a friend of an ex-lover. Real security does not care
<I>who</I> the attacker is, or <I>what</I> their motive may be, but
instead protects against the threat itself. Limited security, on
the other hand, often seeks to guess the identity, capabilities and
motives of the attacker, and concentrates resources at those points.
<P>There is, of course, no <I>absolute</I> security. But we can
have real security against particular, defined threats. Also see:
<A HREF = "#Strength">strength</A>.
<A NAME = "SecurityThroughObscurity"></A>
<P><DT><B>Security Through Obscurity</B>
<DD>A phrase which normally refers to inventing a new
<A HREF = "#Cipher">cipher</A> which
is supposedly
<A HREF = "#Strength">strong</A>, then keeping the cipher secret so
it "cannot be attacked." One problem with this strategy is that
it prevents public review of the cipher design, which means that the
cipher may have serious weaknesses. And it may be much easier for The
<A HREF = "#Opponent">Opponent</A> to obtain the supposedly secret
ciphering program than it would be to break a serious cipher
(see <A HREF = "#Kerckhoff2">Kerckhoff's second requirement</A>).
<P>On the other hand, it can be a mistake to use even a public
and well-reviewed cipher, if the cipher protects enough valuable
information to support a substantial investment in analysis and
equipment to break the cipher.
A reasonable alternative is to select from among a wide variety of
conceptually different ciphers, each of which thus carries far less
information of far less value and so may not warrant a substantial
attack investment.
<A NAME = "Semiconductor"></A>
<P><DT><B>Semiconductor</B>
<DD>A material which is between
<A HREF = "#Conductor">conductor</A> and
<A HREF = "#Insulator">insulator</A> with respect to ease of
electron flow. The obvious examples are silicon and germanium.
<A NAME = "Semigroup"></A>
<P><DT><B>Semigroup</B>
<DD>A
<A HREF = "#Set">set</A> with an
<A HREF = "#Associative">associative</A>
<A HREF = "#Dyadic">dyadic</A> operation which happens to be
<A HREF = "#Closed">closed</A>.
<A NAME = "SessionKey"></A>
<P><DT><B>Session Key</B>
<DD>A
<A HREF = "#Key">key</A> which lasts for the period of a work
"session." A
<A HREF = "#MessageKey">message key</A> used for multiple messages.
<A NAME = "Set"></A>
<P><DT><B>Set</B>
<DD>A collection of distinguishable elements, usually, but not
necessarily, numbers.
<A NAME = "ShiftRegister"></A>
<P><DT><B>Shift Register</B>
<DD>An array of storage elements in which the values in each element
may be "shifted" into an adjacent element. (A new value is shifted
into the "first" element, and the value in the "last" element is
normally lost, or perhaps captured off-chip.) (See
<A HREF = "#LFSR">LFSR</A>.)
<PRE>
Right-Shifting Shift Register (SR)
+----+ +----+ +----+
Carry In -->| A0 |->| A1 |-> ... ->| An |--> Carry Out
+----+ +----+ +----+
</PRE>
<P>In
<A HREF = "#Digital">digital</A>
<A HREF = "#Hardware">hardware</A> versions, elements are
generally
<A HREF = "#Bit">bits</A>, and the stored values actually move
from element to element in response to a
<A HREF = "#Clock">clock</A>.
<A HREF = "#Analog">Analog</A> hardware versions include the
charge-coupled devices (CCD's) used in cameras, where the analog
values from lines of sensors are sampled in parallel, then
serialized and stepped off the chip to be digitized and processed.
<P>In
<A HREF = "#Software">software</A> versions, elements are often
<A HREF = "#Byte">bytes</A> or larger values, and the values may
not actually move during stepping. Instead, the values may reside
in a circular array, and one or more offsets into that array may
step. In this way, even huge amounts of
<A HREF = "#State">state</A> can be "shifted" by changing a
single index or pointer.
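<P>The circular-array technique can be sketched in a few lines of
Python (names illustrative):

```python
class CircularShiftRegister:
    """A software shift register: values sit in a circular array and
    only a single index steps, so a 'shift' moves no data."""

    def __init__(self, size):
        self.buf = [0] * size
        self.head = 0            # index of the "first" element

    def shift_in(self, value):
        """Insert value as the new first element; return the value
        pushed out of the last element."""
        self.head = (self.head - 1) % len(self.buf)
        out = self.buf[self.head]
        self.buf[self.head] = value
        return out

    def __getitem__(self, i):    # element i, counted from the first
        return self.buf[(self.head + i) % len(self.buf)]

sr = CircularShiftRegister(4)
for v in (1, 2, 3, 4):
    sr.shift_in(v)
assert [sr[i] for i in range(4)] == [4, 3, 2, 1]
assert sr.shift_in(5) == 1       # the oldest value falls off the end
```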
<A NAME = "Shuffle"></A>
<P><DT><B>Shuffle</B>
<DD>Generally, the concept of "mixing up" a set of objects,
symbols or elements, as in shuffling cards. Mathematically,
each possible arrangement of elements is a particular
<A HREF = "#Permutation">permutation</A>.
<P>Within a
<A HREF = "#Computer">computer</A> environment, it is easy to
shuffle an arbitrary number of symbols using a
<A HREF = "#RandomNumberGenerator">random number generator</A>,
and the algorithm of Durstenfeld, which is described in Knuth II:
<BLOCKQUOTE>
<P>Durstenfeld, R. 1964. Algorithm 235, Random Permutation,
Procedure SHUFFLE. <I>Communications of the ACM.</I> 7: 420.
<P>Knuth, D. 1981. <I>The Art of Computer Programming,</I>
Vol. 2, <I>Seminumerical Algorithms.</I> 2nd ed. 139.
Reading, Mass: Addison-Wesley.
</BLOCKQUOTE>
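<P>Durstenfeld's algorithm itself is only a few lines; a Python sketch:

```python
import random

def durstenfeld_shuffle(a, rng=random):
    """Durstenfeld's in-place shuffle: for each position from the
    end, exchange it with a randomly chosen position at or before it.
    Every permutation is equally likely if rng is uniform."""
    for i in range(len(a) - 1, 0, -1):
        j = rng.randrange(i + 1)   # 0 <= j <= i
        a[i], a[j] = a[j], a[i]
    return a

deck = durstenfeld_shuffle(list(range(52)), random.Random(7))
assert sorted(deck) == list(range(52))   # a permutation of the deck
```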
<A NAME = "SieveOfEratosthenes"></A>
<P><DT><B>Sieve of Eratosthenes</B>
<DD>A way to find relatively small
<A HREF = "#Prime">primes</A>. Although small
primes are less commonly useful in
<A HREF = "#Cryptography">cryptography</A>
than large (say, 100+ digit) primes, they <I>can</I> at least
help to validate implementations of the procedures used to
find large primes.
<P>Basically, the "Sieve of Eratosthenes" starts out with a table
of numbers from 1 to some limit, all of which are potential primes,
and the knowledge that 2 is a prime. Since 2 is a prime, no other
prime can have 2 as a factor, so we run through the table discarding
all multiples of 2. The next remaining number above 2 is 3, which
we accept as a prime, and then run through the table crossing off all
multiples of 3. The next remaining is 5, so we cross off all
multiples of 5, and so on. After we cross off each prime up to
the square root of the highest value in the table, the table will
contain only primes.
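<P>The crossing-off process translates directly into code; a Python
sketch:

```python
def sieve_of_eratosthenes(limit):
    """Return all primes <= limit by crossing off multiples."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False       # 0 and 1 are not prime
    p = 2
    while p * p <= limit:                   # only up to sqrt(limit)
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False         # cross off multiples of p
        p += 1
    return [n for n in range(limit + 1) if is_prime[n]]

assert sieve_of_eratosthenes(30) == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```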
<P>A similar process works with small
<A HREF = "#Polynomial">polynomials</A>, and small polynomial
<A HREF = "#Field">fields</A>, to find
<A HREF = "#Irreducible">irreducible</A> polynomials.
<A NAME = "Significance"></A>
<P><DT><B>Significance</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, the probability of
committing a
<A HREF = "#TypeIError">type I error</A>, the rejection of a true
<A HREF = "#NullHypothesis">null hypothesis</A>.
Given the probability distribution of the test
<A HREF = "#Statistic">statistic</A> for the case "nothing
unusual found," that area which is sufficiently unlikely that values
in this <I>critical region</I> would lead to rejecting the
null hypothesis, and thus accepting the
<A HREF = "#AlternativeHypothesis">alternative hypothesis</A>.
<A NAME = "SimpleSubstitution"></A>
<P><DT><B>Simple Substitution</B>
<DD>A type of
<A HREF = "#Substitution">substitution</A> in which each possible
symbol is given a unique replacement symbol.
<P>Perhaps the original classical form of
<A HREF = "#Cipher">cipher</A>, in which each
<A HREF = "#Plaintext">plaintext</A> character is
<A HREF = "#Encipher">enciphered</A> as some different character.
In essence, the order of the alphabet is scrambled or
<A HREF = "#Permutation">permuted</A>, and the
particular scrambled order (or the scrambling process which
creates that particular order) is the cipher
<A HREF = "#Key">key</A>. Normally we
think of scrambling alphabetic letters, but any
<A HREF = "#Computer">computer</A>
<A HREF = "#Code">coding</A> can be scrambled similarly.
<P>Small, practical examples of simple substitution are easily
realized in
<A HREF = "#Hardware">hardware</A> or
<A HREF = "#Software">software</A>. In software, we can have a
table of values each of which can be indexed or selected by
element number. In hardware, we can simply have addressable
memory. Given an index value, we can select the element at the
index location, and read or change the value of the selected
element.
<P>A
<A HREF = "#SubstitutionTable">substitution table</A>
will be initialized to contain exactly one
occurrence of each possible symbol or character. This allows
enciphering to be reversed and the
<A HREF = "#Ciphertext">ciphertext</A>
<A HREF = "#Decipher">deciphered</A>. For
example, suppose we substitute a two-bit quantity, thus a value
0..3, in a particular table as follows:
<PRE>
2 3 1 0.
</PRE>
<P>The above substitution table takes an input value to an output
value by selecting a particular element. For example, an input
of 0 selects 2 for output, and an input of 2 selects 1. If this
is our enciphering, we can decipher with an inverse table.
Since 0 is enciphered as 2, 2 must be deciphered as 0, and since
2 is enciphered as 1, 1 must be deciphered as 2, with the whole
table as follows:
<PRE>
3 2 0 1.
</PRE>
<P>Mathematically, a simple substitution is a
<A HREF = "#Mapping">mapping</A> (from input
to output) which is
<A HREF = "#OneToOne">one-to-one</A> and
<A HREF = "#Onto">onto</A>, and is therefore
<A HREF = "#Invertible">invertible</A>.
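The two-bit tables above can be checked with a short sketch; building the deciphering table simply swaps the roles of index and value:

```python
def invert(table):
    """Build the deciphering table for an invertible substitution table."""
    inverse = [0] * len(table)
    for index, value in enumerate(table):
        inverse[value] = index  # since index enciphers to value
    return inverse

encipher = [2, 3, 1, 0]       # the example table from the text
decipher = invert(encipher)   # the inverse table: [3, 2, 0, 1]

assert decipher == [3, 2, 0, 1]
# Deciphering undoes enciphering for every possible input.
assert all(decipher[encipher[x]] == x for x in range(4))
```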
<A NAME = "Software"></A>
<P><DT><B>Software</B>
<DD>The description of a logic machine. The original textual
composition is called
<A HREF = "#SourceCode">source code</A>, the file of compiled
<A HREF = "#Opcode">opcode</A> values is called
<A HREF = "#ObjectCode">object code</A>, and the final linked
result is pure "machine code" or
<A HREF = "#MachineLanguage">machine language</A>.
Note that, by itself, software does not and cannot function,
but instead relies upon
<A HREF = "#Hardware">hardware</A> for all functionality.
When "software" is running, there is no software there:
there is only hardware memory, with hardware
<A HREF = "#Bit">bits</A> which can be
sensed and stored, hardware counters and registers, and hardware
<A HREF = "#Digital">digital</A>
<A HREF = "#Logic">logic</A> to make decisions. See:
<A HREF = "#Computer">computer</A>,
<A HREF = "#System">system</A>,
<A HREF = "#SystemDesign">system design</A>, and
<A HREF = "#Debug">debug</A>.
<A NAME = "SourceCode"></A>
<P><DT><B>Source Code</B>
<DD>The textual representation of a
<A HREF = "#Computer">computer</A> program as it is
written by a programmer. Nowadays, source is typically in a
high-level language like C, C++ or Pascal, but inevitably some
programmers must work "close to the machine" in assembly language.
The
"<A HREF = "#Code">code</A>" part of this is presumably an
extension of the idea that, ultimately, all computer programs are
executed as "machine code" or
<A HREF = "#MachineLanguage">machine language</A>. This consists
of numeric values or "operation codes"
("<A HREF = "#Opcode">opcodes</A>") which select
the instruction to be executed, and so represent a very public
<I>code</I> for those instructions. Also see
<A HREF = "#ObjectCode">object code</A>.
<A NAME = "State"></A>
<P><DT><B>State</B>
<DD>Information storage, or "memory." In abstract machine theory,
retained information, generally used to influence future events.
<P>In
<A HREF = "#Statistics">statistics</A>, the current symbol from
a sequence, or a value which selects or conditions possible
outcomes (see:
<A HREF = "#MarkovProcess">Markov process</A>).
<P>We normally measure "state" in units of information or
<A HREF = "#Bit">bits</A>, and 8 bits of "state" can support
2<SUP>8</SUP> or 256
different state-value combinations or <I>states</I>.
<P>Also see:
<A HREF = "#Deterministic">deterministic</A> and
<A HREF = "#Keyspace">keyspace</A>.
<A NAME = "StationaryProcess"></A>
<P><DT><B>Stationary Process</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, a
<A HREF = "#Stochastic">stochastic</A> (random)
<A HREF = "#Process">process</A> (function) whose general
<A HREF = "#Statistic">statistics</A> do not change over time;
in which every sub-sequence is representative of the whole;
a homogeneous process. This may not be true of a
<A HREF = "#MarkovProcess">Markov process</A>. Also see:
<A HREF = "#Ergodic">ergodic</A>.
<A NAME = "Statistic"></A>
<P><DT><B>Statistic</B>
<DD>A computation or process intended to reduce diverse results into
a one-dimensional ordering of values for better understanding and
comparison. Also the value result of such a computation. See
<A HREF = "#Statistics">statistics</A>.
<P>A useful statistic will have some known (or at least explorable)
probability
<A HREF = "#Distribution">distribution</A> for the case "nothing
unusual found." This allows the statistic value to be interpreted
as the probability of finding that value or less, for the case
"nothing unusual found." Then, if improbable statistic values occur
repeatedly and systematically, we can infer that something unusual
<I>is</I> being found, leading to the rejection of the
<A HREF = "#NullHypothesis">null hypothesis</A>.
<P>It is also possible to explore different distributions for the
same statistic under different conditions. This can provide a way
to guess which condition was in force when the data were obtained.
<A NAME = "Statistics"></A>
<P><DT><B>Statistics</B>
<DD>The mathematical science of interpreting probability to
extract meaning from diverse results.
Also the analysis of a large
<A HREF = "#Population">population</A> based on a limited number of
<A HREF = "#Random">random</A>
<A HREF = "#Sample">samples</A> from that population; this is also
the ability to state probability bounds for the correctness of
certain types of
<A HREF = "#InductiveReasoning">inductive reasoning</A>. See
<A HREF = "#Statistic">statistic</A> and
<A HREF = "#RandomVariable">random variable</A>.
<P>The usual role of statistics is to identify particular systematic
events in the context of expected random variations that may conceal
such events. This often occurs in a context of difficult and costly
experimentation, and there is a premium on results which are so good
that they stand above the noise; it may be that not much is lost if a
weak positive is ignored.
<P>In contrast, cryptography and randomness generally support vast
amounts of testing at low cost, and we seek weak indications.
In this context, we may find it more useful to conduct many tests
and collect many statistic values, then visually and mathematically
compare the experimental distribution to the ideal for that statistic.
<P>A statistical
<A HREF = "#Distribution">distribution</A> usually represents what we
should expect from random data or random sampling. If we have random
data, statistic values exceeding 95% of the distribution (often
called <I>failure</I>) <I>should</I> occur about 1 time in 20.
And since that one time may happen on the very first test, it is
only prudent to conduct many tests and accumulate results which are
more likely to represent reality than any one result from a single
test.
<P>In statistical randomness testing, "failure" should and <I>must</I>
occur with the appropriate frequency. Thus, the failure to fail is
itself a failure! This means that the very concept of statistical
"failure" often may be inappropriate for cryptographic use. Grading
a result as "pass" or "fail" discards all but one
<A HREF = "#Bit">bit</A> of information. Further,
a pass / fail result is a
<A HREF = "#BernoulliTrials">Bernoulli trial</A>, which would take
many, many similar tests to properly characterize. So it may be more
appropriate to collect 20 or more statistic probability values, and
then compare the accumulation to the expected distribution for that
statistic. This will provide a substantial basis for asserting that
the sampled process either did or did not produce the same
<A HREF = "#Statistic">statistic</A>
<A HREF = "#Distribution">distribution</A> as a random process.
<P>Due to random sampling, any statistical result is necessarily
a <I>probability,</I> rather than certainty. An "unlucky" sampling
can produce statistical results which imply the opposite of reality.
In general, statistics simply <B>cannot</B> provide the 100 percent
certainty which is traditionally expected of mathematical "proof."
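The recommended approach of accumulating many statistic probability values and comparing them to the ideal distribution can be sketched as follows. This is an illustrative sketch, not a particular named test suite: under the null hypothesis each test yields a p-value uniform on [0,1), and we measure the maximum deviation of the accumulated values from that uniform ideal (a Kolmogorov-Smirnov-style statistic):

```python
import random

def ks_uniform(p_values):
    """Maximum deviation between the empirical distribution of
    p-values and the uniform distribution they should follow."""
    xs = sorted(p_values)
    n = len(xs)
    return max(max(abs((i + 1) / n - x), abs(x - i / n))
               for i, x in enumerate(xs))

random.seed(1)
# For the case "nothing unusual found," each test's p-value is
# uniform; collect many rather than pass/fail a single test.
ps = [random.random() for _ in range(1000)]
print(ks_uniform(ps))  # small when the values really are uniform
```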
<A NAME = "Steganography"></A>
<P><DT><B>Steganography</B>
<DD>Greek for "covered writing." Methods of
<A HREF = "#Cryptology">cryptology</A> which seek to conceal
the <I>existence</I> of a message. As opposed to
<A HREF = "#Cryptography">cryptography</A> which seeks to hide
the <I>information</I> in the message, even if the message itself
is completely exposed.
<A NAME = "Stochastic"></A>
<P><DT><B>Stochastic</B>
<DD>In
<A HREF = "#Statistics">statistics</A>,
<A HREF = "#Random">random</A>; involving a
<A HREF = "#RandomVariable">random variable</A>.
<A NAME = "StreamCipher"></A>
<P><DT><B>Stream Cipher</B>
<DD>A
<A HREF = "#Cipher">cipher</A> which directly handles messages
of arbitrary size,
by ciphering individual elements, such as
<A HREF = "#Bit">bits</A> or
<A HREF = "#Byte">bytes</A>. This
avoids the need to accumulate data into a
<A HREF = "#Block">block</A> before ciphering,
as is necessary in a conventional
<A HREF = "#BlockCipher">block cipher</A>. But note that a stream
cipher can be seen as an
<A HREF = "#OperatingMode">operating mode</A>, a "streaming"
of a tiny block transformation. Stream ciphers can be called
"<A HREF = "#Combiner">combiner</A>-style" ciphers.
<H4>Stream Cipher Diffusion</H4>
<P>In a conventional stream cipher, each element (for example,
each byte) of the message is ciphered independently, and does
not affect any other element.
<P>In a few stream cipher designs, the value of one message byte
may change the enciphering of <I>subsequent</I> message bytes;
this is forward data
<A HREF = "#Diffusion">diffusion</A>. But a stream cipher
<B>cannot</B> change the enciphering of <I>previous</I> message
bytes. In contrast, changing even the last bit in a block cipher
block will generally change about half of the earlier bits within
that same block. Changing a bit in one block may even affect
later blocks if we have some sort of stream meta-cipher composed
of block cipher transformations, like
<A HREF = "#CBC">CBC</A>.
<P>Note that a stream cipher generally does not need data
diffusion for strength, as does a block cipher. In a block
cipher, it may be possible to separate individual components
of the cipher if their separate effects are not hidden by
diffusion. But a stream cipher generally re-uses the same
transformation, and has no multiple data components to hide.
<H4>Stream Cipher Construction</H4>
<P>The classic stream cipher is very simple, consisting of a
<A HREF = "#Key">keyed</A>
<A HREF = "#RandomNumberGenerator">random number generator</A>
which produces a random-like
<A HREF = "#ConfusionSequence">confusion sequence</A> or
<A HREF = "#RunningKey">running key</A>.
That sequence is then combined with
<A HREF = "#Plaintext">plaintext</A> data in a simple
additive <A HREF = "#Combiner">combiner</A> to produce
<A HREF = "#Ciphertext">ciphertext</A>.
<P>When an exclusive-OR combiner is used, exactly the same
construction will also decipher the ciphertext. But if The
Opponents have some
<A HREF = "#KnownPlaintextAttack">known-plaintext</A> and
associated ciphertext, they can easily produce the original
confusion sequence. This, along with their expected knowledge
of the cipher design, may allow them to attack and expose the
confusion generator. If this is successful, it will, of course,
break the system until the RNG is re-keyed.
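Both the classic construction and the known-plaintext hazard it carries can be seen in a toy sketch. Python's <CODE>random</CODE> module stands in for the keyed confusion generator purely for illustration; it is not cryptographically secure:

```python
import random

def keystream(key, length):
    """Toy keyed confusion generator (NOT secure; illustration only)."""
    rng = random.Random(key)
    return bytes(rng.randrange(256) for _ in range(length))

def xor_cipher(key, data):
    """Additive (XOR) combiner: the same call enciphers and deciphers."""
    return bytes(d ^ k for d, k in zip(data, keystream(key, len(data))))

plaintext = b"ATTACK AT DAWN"
ciphertext = xor_cipher(1234, plaintext)
assert xor_cipher(1234, ciphertext) == plaintext  # self-inverse

# Known-plaintext hazard: plaintext XOR ciphertext exposes the
# confusion sequence directly.
recovered = bytes(p ^ c for p, c in zip(plaintext, ciphertext))
assert recovered == keystream(1234, len(plaintext))
```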
<P>The ultimate stream cipher is the
<A HREF = "#OneTimePad">one-time pad</A>, in which a
<A HREF = "#ReallyRandom">really random</A> sequence is never
re-used. But if a sequence <I>is</I> re-used, The Opponent can
generally combine the two ciphertexts, eliminating the confusion
sequence, and producing the combined result of two plaintexts.
Such a combination is normally easy to attack and penetrate.
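The reuse hazard is easy to demonstrate: XORing two ciphertexts made with the same confusion sequence cancels that sequence, leaving only the combination of the two plaintexts. (The fixed keystream here is a hypothetical stand-in for illustration.)

```python
# Two messages enciphered with the SAME confusion sequence.
key = bytes(range(10, 24))  # hypothetical re-used keystream
p1 = b"ATTACK AT DAWN"
p2 = b"RETREAT AT TEN"
c1 = bytes(a ^ k for a, k in zip(p1, key))
c2 = bytes(a ^ k for a, k in zip(p2, key))

# The keystream cancels: c1 XOR c2 == p1 XOR p2.
combined = bytes(a ^ b for a, b in zip(c1, c2))
assert combined == bytes(a ^ b for a, b in zip(p1, p2))
```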
<P>The re-use of confusion sequence is extremely dangerous in a
stream cipher design. In general, all stream cipher designs
<I>must</I> use a
<A HREF = "#MessageKey">message key</A> to assure that the cipher
is keyed with a random value for every new ciphering. This does,
of course, expand the ciphertext by the size of the message key.
<P>Another alternative in stream cipher design is to use a
stronger combiner, such as
<A HREF = "#LatinSquareCombiner">Latin square</A> or
<A HREF = "#DynamicSubstitutionCombiner">Dynamic Substitution</A>
combining.
This can drastically reduce the complexity required in the
confusion generator, which normally provides all stream cipher
strength. Each of these stronger combiners is nonlinear, with
substantial internal
<A HREF = "#State">state</A>, and the designer may elect to use
multiple combinings in sequence, or a selection among different
combiners. Neither of these approaches makes much sense with an
<A HREF = "#AdditiveCombiner">additive combiner</A>.
<A NAME = "Strength"></A>
<P><DT><B>Strength</B>
<DD>The ability of a
<A HREF = "#Cipher">cipher</A> to resist
<A HREF = "#Attack">attack</A> and maintain
<A HREF = "#Secrecy">secrecy</A>.
The overall "strength" of a cipher is the minimum effort required to
<A HREF = "#Break">break</A> the cipher, by any possible attack.
But our <I>knowledge</I> of cipher "strength" is necessarily
<A HREF = "#Contextual">contextual</A> and
<A HREF = "#Subjective">subjective</A>, much like
<I>unpredictability</I> in
<A HREF = "#Random">random</A> sequences.
Although "strength" would seem to be the entire point of using a
cipher,
<A HREF = "#Cryptography">cryptography</A> has no way to measure
strength.
<P>Cipher "strength" is often taken as an absolute universal
<I>negative,</I> the simple <I>non-existence</I> of any attack
which could succeed, assuming some level of attack resources.
But this means that overall "strength" may be forever impossible
to measure, because there is no hope of enumerating and evaluating
<I>every possible</I> attack.
<H4>Strength and Cryptanalysis</H4>
<P>Because we have no tools for the discussion of strength under
all possible
<A HREF = "#Attack">attacks</A>, cipher "strength" is normally
discussed in the context of particular attacks. Each known attack
approach can be elaborated for a particular cipher, and a value
calculated for the effort required to break the cipher in that way;
this may set an "upper bound" on the unknown strength of the cipher
(although some "elaborations" are clearly better than others). And
while this is certainly better than not knowing the strength with
respect to known attacks, such attacks may not represent the
actual threat to the cipher in the field. (A cipher may even be said
to have <I>different</I> "contextual strengths," depending on the
knowledge available to different
<A HREF = "#Opponent">Opponents</A>.) In general, we never know the
"lower bound" or "true" strength of a cipher. So, unless a cipher
is shown to be weaker than we can accept,
<A HREF = "#Cryptanalysis">cryptanalysis</A> provides no useful
information about cipher strength.
<P>It is sometimes argued that "our guys" are just as good as the
Opponents, who thus could not break a cipher with less effort than
we know. Or it is said that if a better break were known, that
secret necessarily would get out. When viewed in isolation such
statements are clearly false reasoning, yet these are the sort of
assumptions that are often implicitly used to assert strength after
<A HREF = "#Cryptanalysis">cryptanalysis</A>.
<P>Since we cannot know the true situation, for a proper
<A HREF = "#Security">security</A> analysis we must instead assume
that our Opponents have more time, are better trained, are better
equipped, and may even be <I>smarter</I> than our guys. Further,
the Opponents are quite likely to function as a well-motivated
group with a common goal and which can keep secrets; clearly, this
is a far different situation than the usual academic cryptanalysis.
So, again, cryptanalysis by "our guys" provides <B>no</B> information
about the strength of the cipher as seen by our Opponents.
<H4>Increasing Probable Strength and Reducing Possible Loss</H4>
<P>Technical strength is just one of the many possibilities for
weakness in a
<A HREF = "#Cipher">cipher</A> system, and perhaps even the
least likely. It is surprisingly difficult to construct a cipher
system without "holes," despite using good ciphers, and The Opponents
get to exploit any overlooked problems.
Users must be educated in security, and must actively keep secrets
or there will be nothing to protect. In contrast,
<A HREF = "#Cryptanalysis">cryptanalysis</A> is very expensive,
success is never assured, and even many of the known
<A HREF = "#Attack">attacks</A> are essentially impossible in
practice.
<P>Nevertheless, it is a disturbing fact that we do not know and
cannot guarantee a "true" strength for any cipher. But there
<I>are</I> approaches which may reduce the probability of
technical weakness and the extent of any loss:
<OL>
<LI>We can <I>extrapolate</I> various attacks beyond weakness
levels actually shown, and thus possibly avoid some weak
ciphers.
<LI>We can use systems that change ciphers periodically. This
will reduce the amount of information under any one cipher,
and so limit the damage if that cipher is weak.
<LI>We can use
<A HREF = "#MultipleEncryption">multiple encryption</A> with
different keys and different ciphers as our standard mode.
In this way, not just one but multiple ciphers must each be
penetrated simultaneously to expose the protected data.
<LI>We can use systems that allow us to stop using ciphers when
they are shown weak, and switch to others.
</OL>
<H4>Kinds of Cipher Strength</H4>
<P>In general, we can consider a
<A HREF = "#Cipher">cipher</A> to be a large
<A HREF = "#Key">key</A>-selected transformation between
<A HREF = "#Plaintext">plaintext</A> and
<A HREF = "#Ciphertext">ciphertext</A>, with two main types of
strength:
<UL>
<LI>One type of "strength" is an inability to extrapolate from
known parts of the transformation (e.g.,
<A HREF = "#KnownPlaintextAttack">known plaintext</A>) to
model -- or even approximate -- the transformation at new points
of interest (message ciphertexts).
<LI>Another type of "strength" is an inability to develop a
particular key, given the known cipher and a large number of
known transformation points.
</UL>
<H4>Views of Strength</H4>
<P>Strength is the effectiveness of fixed defense in the
<A HREF = "#CryptographyWar">cryptography war</A>. In real war,
a strong defense might be a fortification at the top of a mountain
which could only be approached on a single long and narrow path.
Unfortunately, in real military action, time after time, making
assumptions about what the opponent "could not" do turned out to be
deadly mistakes. In cryptography we can at least imagine that
someday we might <I>prove</I> that all approaches but one are
actually <I>impossible,</I> and then guard that last approach; see
<A HREF = "#MathematicalCryptography">mathematical cryptography</A>.
<H4>The Future of Strength</H4>
<P>It is sometimes convenient to see security as a fence around a
restricted compound: We can beef up the front gate, and in some way
measure that increase in "strength." But none of that matters if
someone cuts through elsewhere, or tunnels under, or jumps over.
Until we can produce a cipher design which reduces all the possible
avenues of attack to exactly one, it will be very difficult to
measure "strength."
<P>One possibility might be to construct ciphers in
<A HREF = "#Layer">layers</A> of different puzzles:
Now, the obvious point of having multiple puzzles is to
require multiple solutions before the cipher is broken. But a perhaps
less obvious point is to set up the design so that the solution to one
puzzle requires The Opponent to <I>commit</I> (in an information sense)
in a way that <I>prevents</I> the solution to the next puzzle.
<P>Also see
<A HREF = "#DesignStrength">design strength</A>,
<A HREF = "#PerfectSecrecy">perfect secrecy</A>,
<A HREF = "#IdealSecrecy">ideal secrecy</A>, and
<A HREF = "#Security">security</A>.
<A NAME = "StrictAvalancheCriterion"></A>
<P><DT><B>Strict Avalanche Criterion (SAC)</B>
<DD>A term used in
<A HREF = "#S-Box">S-box</A>
analysis to describe the contents of an invertible
<A HREF = "#Substitution">substitution</A> or, equivalently, a
<A HREF = "#BlockCipher">block cipher</A>. If we have some input
value, and then change one bit in that value, we expect about half
the output bits to change; this is the
<A HREF = "#AvalancheEffect">avalanche effect</A>,
and is caused by an
<A HREF = "#Avalanche">avalanche</A> process.
The <I>Strict Avalanche Criterion</I> requires that each output bit
change with probability one-half (over all possible input starting
values). This is stricter than avalanche,
since if a <I>particular</I> half of the output bits changed
<I>all</I> the time, a strict interpretationist might call
<I>that</I> "avalanche." Also see
<A HREF = "#Complete">complete</A>.
<P>As introduced in Webster and Tavares:
<BLOCKQUOTE>
"If a cryptographic function is to satisfy the strict avalanche
criterion, then each output bit should change with a probability
of one half whenever a single input bit is complemented." [p.524]
</BLOCKQUOTE>
<P>Webster, A. and S. Tavares. 1985. On the Design of S-Boxes.
<I>Advances in Cryptology -- CRYPTO '85.</I> 523-534.
<P>Although the SAC has tightened the understanding of "avalanche,"
even SAC can be taken too literally. Consider the
<A HREF = "#Scalable">scaled-down</A>
block cipher model of a small invertible
<A HREF = "#Key">keyed</A>
<A HREF = "#SubstitutionTable">substitution table</A>:
Any input bit-change thus selects a different table element, and
so produces a random new value (over all possible keys). But when
we compare the new value with the old, we find that typically half
the bits change, and sometimes <I>all</I> the bits change, but
<I>never</I> is there no change at all. This is a tiny bias toward
change.
<P>If we have a 2-bit (4-element) table, there are 4 values, but
after we take one as the original, there are only 3 <I>changed</I>
values, not 4. We will see changes of 1 bit, 1 bit, and 2 bits.
But this is a change expectation of 2/3 for each output bit, instead
of exactly 1/2 as one might interpret from SAC. Although this bias
is clearly size-related, its source is <I>invertibility</I> and
the definition of <I>change.</I> Thus, even a large block cipher
<I>must</I> have <I>some</I> bias, though it is unlikely that we
could measure enough cases to see it. The point is that one can
extend some of these definitions well beyond their intended role.
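The 2/3 change expectation claimed above for 2-bit tables can be verified by brute force over every invertible table and every single-bit input change:

```python
from itertools import permutations

changes = 0
trials = 0
# Every invertible 2-bit substitution table (4! = 24 of them) ...
for table in permutations(range(4)):
    # ... every input value, and every single-bit change of that input.
    for x in range(4):
        for in_bit in (1, 2):
            old, new = table[x], table[x ^ in_bit]
            # Count how often each output bit changes.
            for out_bit in (1, 2):
                trials += 1
                if (old ^ new) & out_bit:
                    changes += 1

print(changes / trials)  # 2/3, not the 1/2 a literal SAC reading expects
```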
<A NAME = "Subjective"></A>
<P><DT><B>Subjective</B>
<DD>In the study of
<A HREF = "#Logic">logic</A>, a particular <I>interpretation</I> of
reality, rather than
<A HREF = "#Objective">objective</A> reality itself.
<A NAME = "Substitution"></A>
<P><DT><B>Substitution</B>
<DD>The concept of replacing one symbol with another symbol. This
might be as simple as a grade-school lined sheet with the alphabet
down the left side, and a substitute listed for each letter.
In
<A HREF = "#Computer">computer</A> science this might be a simple
array of values, any
one of which can be selected by indexing from the start of the
array. See <A HREF = "#SubstitutionTable">substitution table</A>.
<P>Cryptography recognizes four types of substitution:
<UL>
<LI><A HREF = "#SimpleSubstitution">Simple Substitution</A> or
<A HREF = "#MonoalphabeticSubstitution">Monoalphabetic Substitution</A>,
<LI><A HREF = "#HomophonicSubstitution">Homophonic Substitution</A>,
<LI><A HREF = "#PolyalphabeticSubstitution">Polyalphabetic Substitution</A>, and
<LI><A HREF = "#PolygramSubstitution">Polygram Substitution</A>.
</UL>
<A NAME = "SubstitutionPermutation"></A>
<P><DT><B>Substitution-Permutation</B>
<DD>A method of constructing
<A HREF = "#BlockCipher">block ciphers</A> in which block elements
are
<A HREF = "#Substitution">substituted</A>, and the resulting bits
typically
<A HREF = "#Transposition">transposed</A> or scrambled into
a new arrangement. This would be one round of many.
<P>One of the advantages of S-P construction is that the
"permutation" stage can be simply a re-arrangement of wires,
taking almost no time. Such a stage is more clearly described as
a limited set of "transpositions," rather than the more general
"permutation" term. Since substitutions are <I>also</I> permutations
(albeit with completely different costs and effects), one might
fairly describe such a cipher as a "permutation-permutation cipher,"
which is not particularly helpful.
<P>A disadvantage of the S-P construction is the need for special
substitution patterns which support
<A HREF = "#Diffusion">diffusion</A>. S-P ciphers diffuse
bit-changes across the block round-by-round; if one of the
substitution table output bits does not change, then no change
propagates into one of the tables in the next round, which has the
effect of reducing the complexity of the cipher. Consequently,
special tables are required in S-P designs, but even special tables
can only reduce and not eliminate the effect. See
<A HREF = "#Complete">Complete</A>.
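A single round of the construction can be sketched on an 8-bit block: two 4-bit substitutions followed by a fixed bit transposition. The S-box contents and the wiring are arbitrary illustrative choices, not a recommended design:

```python
# Arbitrary invertible 4-bit S-box (illustration only).
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]
# Fixed wiring: output bit i is taken from input bit PERM[i].
PERM = [1, 5, 2, 0, 3, 7, 4, 6]

def sp_round(block):
    """One substitution-permutation round on an 8-bit block."""
    # Substitution stage: two 4-bit S-box lookups.
    block = (SBOX[block >> 4] << 4) | SBOX[block & 0xF]
    # Permutation stage: rearrange the 8 bits (free in hardware wiring).
    return sum(((block >> PERM[i]) & 1) << i for i in range(8))

print(hex(sp_round(0x00)))  # 0xb7
```

A full cipher would iterate such a round many times with key material mixed in between rounds.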
<A NAME = "SubstitutionTable"></A>
<P><DT><B>Substitution Table</B>
<DD>(Also
<A HREF = "#S-Box">S-box</A>.) A linear array of values, indexed
by position, which includes
any value at most once. In cryptographic service, we normally
use binary-power invertible tables with the same input and
output range. For example, a
<A HREF = "#Byte">byte</A>-substitution table will have
256 elements, and will contain each of the values 0..255 exactly
once. Any value 0..255 into that table will select some element
for output which will also be in the range 0..255.
<P>For the same range of input and output values, two invertible
substitution tables differ only in the order or
<A HREF = "#Permutation">permutation</A> of the values in the table.
There are 256
<A HREF = "#Factorial">factorial</A> different byte-substitution
tables, which is a
<A HREF = "#Keyspace">keyspace</A> of 1684 bits.
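The keyspace figure follows directly from counting permutations; a short check using only the standard library:

```python
import math

# The number of distinct byte-substitution tables is 256!,
# so the keyspace in bits is log2(256!).
bits = math.log2(math.factorial(256))
print(round(bits))  # 1684
```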
<P>A
<A HREF = "#Key">keyed</A>
<A HREF = "#SimpleSubstitution">simple substitution</A>
table of sufficient size is the ideal
<A HREF = "#BlockCipher">block cipher</A>.
Unfortunately, with 128-bit blocks being the modern minimum for
strength, there would be 2<SUP>128</SUP> entries in that table,
which is completely out of the question.
<P>A keyed substitution table of <I>practical</I> size can only be
thought of as a weak block cipher by itself, but it can be part of
a combination of
<A HREF = "#Component">components</A> which produce a stronger cipher.
And since an invertible substitution table is the ideal tiny block
cipher, it can be used for direct experimental comparison to a
<A HREF = "#Scalable">scalable</A>
block cipher of that same tiny size.
<A NAME = "Superencryption"></A>
<P><DT><B>Superencryption</B>
<DD>Usually the outer-level
<A HREF = "#Encryption">encryption</A> of a
<A HREF = "#MultipleEncryption">multiple encryption</A>.
Often relatively weak, relying upon the text randomization
effect of the lower-level encryption.
<A NAME = "Surjective"></A>
<P><DT><B>Surjective</B>
<DD><A HREF = "#Onto">Onto</A>.
A <A HREF = "#Mapping">mapping</A> f: <I>X -> Y</I> where
<I>f(x)</I> covers all elements in <I>Y.</I>
Not necessarily invertible, since multiple elements
<I>x</I> in <I>X</I> could produce the same <I>f(x)</I> in <I>Y.</I>
<A NAME = "Switch"></A>
<P><DT><B>Switch</B>
<DD>Classically, an electro-mechanical device which physically
presses two
<A HREF = "#Conductor">conductors</A> together at a contact point,
thus "making" a
<A HREF = "#Circuit">circuit</A>, and also pulls the conductors
apart, thus allowing air to
<A HREF = "#Insulator">insulate</A> them and thus "breaking" the
circuit.
More generally, something which exhibits a significant change in
some parameter between "ON" and "OFF."
<A NAME = "SwitchingFunction"></A>
<P><DT><B>Switching Function</B>
<DD>A <A HREF = "#LogicFunction">logic function</A>.
<A NAME = "SymmetricCipher"></A>
<P><DT><B>Symmetric Cipher</B>
<DD>A
<A HREF = "#SecretKeyCipher">secret key cipher</A>.
<A NAME = "SymmetricGroup"></A>
<P><DT><B>Symmetric Group</B>
<DD>The symmetric
<A HREF = "#Group">group</A> is the set of all
<A HREF = "#OneToOne">one-to-one</A>
<A HREF = "#Mapping">mappings</A> from a set into itself.
The collection of all
<A HREF = "#Permutation">permutations</A> of some set.
<P>Suppose we consider a
<A HREF = "#BlockCipher">block cipher</A> to be a key-selected
permutation of the
<A HREF = "#Block">block</A> values: One question of interest
is whether our cipher construction could, if necessary, reach
every possible permutation, the symmetric group.
<A NAME = "System"></A>
<P><DT><B>System</B>
<DD>An interconnecting network of
<A HREF = "#Component">components</A> which coordinate to perform
a larger function. Also a system of ideas. See
<A HREF = "#SystemDesign">system design</A>.
<A NAME = "SystemDesign"></A>
<P><DT><B>System Design</B>
<DD>The design of potentially complex
<A HREF = "#System">systems</A>.
<P>It is now easy to construct large
<A HREF = "#Hardware">hardware</A> or
<A HREF = "#Software">software</A> systems which are almost
unmanageably complex and never error-free. But a good design and
development approach can produce systems with far fewer problems.
One such approach is:
<P><OL>
<LI>Decompose the system into small, <I>testable</I>
components.
<LI>Construct and then <I>actually test</I> each of the
components individually.
</OL>
<P>This is both easier and harder than it looks: there are
<I>many</I> ways to decompose a large system, and finding an
effective and efficient decomposition can take both experience
and trial-and-error. But many of the possible decompositions
define components which are less testable or even <I>un</I>testable,
so the testability criterion greatly reduces the search.
<P>Testing is no panacea: we cannot hope to find all possible bugs
this way. But in practice we <I>can</I> hope to find 90 percent or
more of the bugs simply by <I>actually testing</I> each component.
(Component testing means that we are forced to <I>think</I> about
what each component does, and about its requirements and limits.
Then we have to make the realized component <I>conform</I> to those
tests, which were based on our theoretical concepts. This will often
expose problems, whether in the implementation, the tests, or the
concepts.) By testing all components, when we put the system
together, we can hope to avoid having to
<A HREF = "#Debug">debug</A> multiple independent problems
simultaneously.
<P>Other important system design concepts include:
<UL>
<LI>Build in test points and switches to facilitate run-time
inspection, control, and analysis.
<LI>Use repeatable comprehensive tests at all levels, and when
a component is "fixed," run those tests again.
<LI>Start with the most basic system and fewest components,
make that "work" (pass appropriate system tests), then
"add features" one-by-one. Try not to get too far before
making the expanded system work again.
</UL>
<A NAME = "TableSelectionCombiner"></A>
<HR><P><DT><B>Table Selection Combiner</B>
<DD>A
<A HREF = "#Combiner">combining</A>
<A HREF = "#Mechanism">mechanism</A> in which one input selects
a table or substitution alphabet, and another input selects a
value from within the selected table, said value becoming the
combined result. Also called a
<A HREF = "#PolyalphabeticCombiner">Polyalphabetic Combiner</A>.
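<P>As a minimal sketch of the idea (the rotation tables and all names below
are purely illustrative, not from any fielded design), one input picks a
table and the other picks an entry in it:

```python
# Table selection combiner sketch: one input selects a substitution
# table, the other selects a value within that table.  The rotation
# tables here are purely illustrative; a real design would use
# keyed, shuffled tables.

def make_tables(num_tables, size):
    """Build simple distinct tables: table r maps i to (i + r) mod size."""
    return [[(i + r) % size for i in range(size)] for r in range(num_tables)]

def combine(select, value, tables):
    """One input selects the table, the other selects the entry."""
    return tables[select][value]

tables = make_tables(4, 16)
print(combine(2, 5, tables))  # 7: entry 5 of table 2 is (5 + 2) mod 16
```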
<A NAME = "TEMPEST"></A>
<P><DT><B>TEMPEST</B>
<DD>Supposedly the acronym for "Transient Electromagnetic Pulse
Emanation Surveillance Technology."
Originally, the potential insecurity due to the
<A HREF = "#ElectromagneticField">electromagnetic</A> radiation
which inherently occurs when a
<A HREF = "#Current">current</A> flow changes in a
<A HREF = "#Conductor">conductor</A>. Thus, pulses from
<A HREF = "#Digital">digital</A>
<A HREF = "#Circuit">circuitry</A>
might be picked up by a receiver, and the
<A HREF = "#Plaintext">plaintext</A> data reconstructed.
The general concept can be extended to the idea that plaintext
data pulses may escape on power lines, or as a faint background
signal to encrypted data, or in any other unexpected
<A HREF = "#Electronic">electronic</A> way.
<P>Some amount of current change seems inevitable when switching
occurs, and modern digital computation is based on such switching.
But the amount of electromagnetic radiation emitted depends upon the
amount of current switched, the length of the conductor, and the
speed of the switching (that is, dI/dt, or the rate-of-change
in current).
In normal processing the amount of radiated energy is very small, but
the value can be much larger when fast power drivers are used to send
signals across cables of some length. This typically results in
broadband noise which can be sensed with a shortwave receiver, a
television, or an AM portable radio. Such receivers can be used to
monitor attempts at improving the shielding.
<P>Ideally, equipment would be fully enclosed in an electrically
unbroken conducting surface. In practice, the conductive enclosure
may be sheet metal or screening, with holes for shielded cables.
Shielding occurs not primarily from metal <I>per se,</I> but instead
from the flow of electrical current in that metal.
When an electromagnetic wave passes through a conductive surface, it
induces a current, and that current change creates a similar but
<I>opposing</I> electromagnetic wave which nearly <I>cancels</I>
the original. The metallic surface must conduct in all directions
to properly neutralize waves at every location and from every
direction.
<P>Stock
<A HREF = "#Computer">computer</A> enclosures often have huge unshielded
openings which are hidden by a plastic cover. These should be
covered with metal plates or screening, making sure that good
electrical contact occurs at all places around the edges. Note that
assuring good electrical connections can be difficult with aluminum,
which naturally forms a thin but hard and non-conductive surface
oxide. It is important to actually monitor
emission levels with receivers both before and after any change, and
complete success can be very difficult. We can at least make sure
that the shielding is tight (that it electrically conducts to all
the surrounding metal), that it is as complete as possible, and that
external cables are effectively shielded.
<P>Cable shielding extends the conductive envelope around signal
wires and into the envelope surrounding the equipment the wire goes
to. Any electromagnetic radiation from within a shield will tend to
produce an opposing current in the shield conductor which will
"cancel" the original radiation. But if a cable shield is <I>not</I>
connected at <I>both</I> ends, no opposing current can flow, and no
electromagnetic shielding will occur, despite having a metallic
"shield" around the cable. It is thus necessary to assure that each
external cable <I>has</I> a shield, and that the shield is <I>connected</I>
to a conductive enclosure at <I>both</I> ends. (Note that some
equipment may have an isolating capacitor between the shield and
chassis ground to minimize "ground loop" effects when the equipment
at each end of the cable connects to different
<A HREF = "#AC">AC</A> sockets.) When
shielding is impossible, it can be useful to place ferrite beads or
rings around cables to promote a balanced and therefore essentially
non-radiating signal flow.
<P>Perhaps the most worrisome emitter on a personal computer is the
display cathode ray tube (CRT). Here we have a bundle of three
electron beams, serially modulated, with reasonable
current, switching quickly, and repeatedly tracing the exact same
picture, typically 60 times a second. This produces a substantial,
recognizable signal, and the repetition allows each display point to be
compared across many different receptions, thus removing noise and
increasing the effective range of the unintended communication. All
things being equal, a liquid-crystal display should radiate a far
smaller and also more-complex signal than a desktop CRT.
<A NAME = "Transformer"></A>
<P><DT><B>Transformer</B>
<DD>A passive electrical
<A HREF = "#Component">component</A>
composed of magnetically-coupled coils of wire.
When
<A HREF = "#AC">AC</A> flows through one coil or "primary," it
creates a changing
<A HREF = "#MagneticField">magnetic field</A>
which induces power in another coil. A transformer
thus <I>isolates</I> power or signal, and also can change the
<A HREF = "#Voltage">voltage</A>-to-<A HREF = "#Current">current</A>
ratio, for example to "step down" line voltage for low-voltage use,
or to "step up" low voltages for high-voltage devices (such as
tubes or plasma devices).
<A NAME = "Transistor"></A>
<P><DT><B>Transistor</B>
<DD>An active
<A HREF = "#Semiconductor">semiconductor</A>
<A HREF = "#Component">component</A> which performs
<A HREF = "#Analog">analog</A>
<A HREF = "#Amplifier">amplification</A>.
<P>Originally, a
bipolar
version with three terminals: Emitter (e), Collector (c), and
Base (b).
<A HREF = "#Current">Current</A> flow through the base-emitter
junction (I<SUB>be</SUB>) is amplified by the current
<A HREF = "#Gain">gain</A> or beta (B) of the device in allowing
current to flow through the collector-base junction and on through
the emitter (I<SUB>ce</SUB>).
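<P>The gain relationship can be put in concrete terms (the values below are
typical magnitudes chosen for illustration, not from any particular device):

```python
# Bipolar current gain: collector current is approximately beta
# times base current (illustrative values only).
beta = 100        # current gain (B)
I_b = 10e-6       # base current: 10 microamps
I_c = beta * I_b  # collector current
print(I_c)        # 0.001, i.e. 1 milliamp
```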
<P>In a sense, a bipolar transistor consists of two back-to-back
<A HREF = "#Diode">diodes</A>: the base-collector junction (operated
in reverse bias) and the base-emitter junction (operated in forward
bias) which influence each other. Current through the base-emitter
junction releases either electrons or "holes" which are then drawn
to the collector junction by the higher potential there, thus
increasing collector current. The current ratio between the base
input and the collector output is amplification.
<P>Field-Effect Transistors (FET's, as in MOSFET, etc.) have an
extremely high input impedance, taking essentially no input current,
and may be more easily fabricated in integrated circuits than
bipolars. In an FET, Drain (d) and Source (s) contacts connect to
a "doped" semiconductor channel. Extremely close to that channel,
but still insulated from it, is a conductive area connected to a
Gate (g) contact. Voltage on the gate creates an electrostatic field
which interacts with current flowing in the drain-source channel, and
can act to turn that current ON or OFF, depending on channel material
(P or N), doping (enhancement or depletion), and gate polarity.
Sometimes the drain and source terminals are interchangeable, and
sometimes the source is connected to the substrate. Instead of an
insulated gate, we can also have a reverse-biased diode junction,
as in a JFET.
<P>N-channel FET's generally work better than p-channel devices.
JFET's can only have "depletion mode," which means that, with the
gate grounded to the source, they are ON. N-channel JFET devices go
OFF with a negative voltage on the gate. Normally, MOSFET devices
are "enhancement mode" and are OFF with their gate grounded.
N-channel MOSFET devices go ON with a positive voltage (0.5 to 5v)
on the gate. Depletion mode n-channel MOSFET devices are possible,
but not common.
<A NAME = "Transposition"></A>
<P><DT><B>Transposition</B>
<DD>The exchange in position of two elements. The most
primitive possible
<A HREF = "#Permutation">permutation</A> or re-ordering of elements.
Any possible permutation can be constructed from a sequence of
transpositions.
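<P>The claim that any permutation can be built from transpositions is easy to
demonstrate directly (a sketch; the simple selection-style decomposition below
is one of many possible):

```python
# Build an arbitrary permutation from a sequence of transpositions
# (pairwise exchanges of two elements).

def transpositions_for(perm):
    """Return index swaps that transform the identity into perm."""
    p = list(range(len(perm)))
    swaps = []
    for i in range(len(perm)):
        if p[i] != perm[i]:
            j = p.index(perm[i])
            p[i], p[j] = p[j], p[i]
            swaps.append((i, j))
    return swaps

target = [2, 0, 3, 1]
p = list(range(4))
for i, j in transpositions_for(target):
    p[i], p[j] = p[j], p[i]
print(p == target)  # True: the swaps reproduce the permutation
```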
<A NAME = "TrapDoor"></A>
<P><DT><B>Trap Door</B>
<DD>A
<A HREF = "#Cipher">cipher</A> design feature, presumably planned,
which allows the apparent strength of the design to be easily
avoided by those who know the trick.
Similar to <A HREF = "#BackDoor">back door</A>.
<A NAME = "TripleDES"></A>
<P><DT><B>Triple DES</B>
<DD>The particular
<A HREF = "#BlockCipher">block cipher</A> which is the U.S. Data
Encryption Standard or
<A HREF = "#DES">DES</A>, performed three times, with two or
three different keys.
<A NAME = "TrulyRandom"></A>
<P><DT><B>Truly Random</B>
<DD>A random value or sequence derived from a physical source.
Also called <A HREF = "#ReallyRandom">really random</A> and
<A HREF = "#PhysicallyRandom">physically random</A>.
<A NAME = "Trust"></A>
<P><DT><B>Trust</B>
<DD>The assumption of a particular outcome in a dependence upon
someone else. Trust is the basis for communications secrecy:
While
<A HREF = "#Secrecy">secrecy</A> can involve keeping one's own
secrets, <I>communications secrecy</I> almost inevitably involves
at least a second party. We thus necessarily "trust" that party
with the secret itself, to say nothing of
<A HREF = "#Cryptography">cryptographic</A>
<A HREF = "#Key">keys</A>. It makes little sense to talk about
secrecy in the absence of trust.
<P>In a true
<A HREF = "#Security">security</A> sense, it is impossible to fully
trust <I>anyone:</I> Everyone has their weaknesses, their oversights,
their own agendas. But normally "trust" involves some form of
<I>commitment</I> by the other party to keep any secrets that occur.
Normally the other party is constrained in some way, either by their
own self-interest, or by contractual, legal, or other consequences of
the failure of trust. The idea that there can be any realistic
trust between two people who have never met, are not related, have
no close friends in common, are not in the same employ, and are not
contractually bound, can be a very dangerous delusion. It is
important to recognize that no trust is without limit, and those
limits are precisely the commitment of the other party, bolstered
by the consequences of betrayal. Trust without consequences is
necessarily a very weak trust.
<A NAME = "TruthTable"></A>
<P><DT><B>Truth Table</B>
<DD>Typically, a
<A HREF = "#BooleanFunction">Boolean function</A> expressed as the
table of the value it will produce for each possible combination of
input values.
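<P>For example, the truth table of the two-input XOR function can be
tabulated directly (a minimal sketch):

```python
# Truth table of 2-input XOR: the output for each possible input
# combination, inputs enumerated in the usual 00, 01, 10, 11 order.
table = [(a, b, a ^ b) for a in (0, 1) for b in (0, 1)]
for a, b, out in table:
    print(a, b, out)
# The output column alone is [0, 1, 1, 0].
```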
<A NAME = "TypeIError"></A>
<P><DT><B>Type I Error</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, the rejection of a true
<A HREF = "#NullHypothesis">null hypothesis</A>.
<A NAME = "TypeIIError"></A>
<P><DT><B>Type II Error</B>
<DD>In
<A HREF = "#Statistics">statistics</A>, the acceptance of a false
<A HREF = "#NullHypothesis">null hypothesis</A>.
<A NAME = "Unary"></A>
<P><DT><HR><P><B>Unary</B>
<DD>From the Latin for "one kind." Sometimes used to describe
functions with a single argument, such as "the unary -" (the
minus-sign), as opposed to subtraction, which presumably would be
"binary," and <I>that</I> could get very confusing very fast. Thus,
<A HREF = "#Monadic">monadic</A> may be a better choice. Also see:
<A HREF = "#Binary">binary</A> and
<A HREF = "#Dyadic">dyadic</A>.
<A NAME = "UnexpectedDistance"></A>
<P><DT><B>Unexpected Distance</B>
<DD>The values computed by a
<A HREF = "#FastWalshTransform">fast Walsh transform</A>
when calculating
<A HREF = "#BooleanFunctionNonlinearity">Boolean function nonlinearity</A>,
as is often done in
<A HREF = "#S-Box">S-box</A> analysis.
<P>Given any two
<A HREF = "#Random">random</A>
<A HREF = "#Boolean">Boolean</A> sequences of the same length, we
"expect" to find about half of the bits the same, and about half
different. This means that the <I>expected</I>
<A HREF = "#HammingDistance">Hamming distance</A> between two
sequences is half their length.
<P>With respect to Boolean function nonlinearity, the expected
distance is not only what we <I>expect,</I> it is also
<I>the best we can possibly do,</I> because each
<A HREF = "#AffineBooleanFunction">affine Boolean function</A>
comes in both complemented and uncomplemented versions. So if
<I>more</I> than half the bits differ between a random function and
one version, then <I>less</I> than half must differ to the other
version. This makes the expected distance the ideal reference
point for nonlinearity.
<P>Since the FWT automatically produces the difference between the
expected distance and the distance to each possible affine Boolean
function (of the given length), I call this the <I>un</I>expected
distance.
Each term is positive or negative, depending on which version is
more correlated to the given sequence, and the absolute value of
this is a measure of
<A HREF = "#Linear">linearity</A>. But since we generally want
<I>non</I>linearity, we typically subtract the unexpected value from
half the length of the sequence.
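<P>A small sketch of the computation (using the common {+1,-1} convention;
the example function is illustrative):

```python
# Fast Walsh transform of a Boolean function in {+1,-1} form.  Each
# transform term measures signed correlation with one affine Boolean
# function -- the "unexpected" deviation from the expected Hamming
# distance of half the sequence length.  Nonlinearity is then half
# the length minus the largest magnitude, halved.

def fwt(v):
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v

tt = [0, 0, 0, 1, 1, 1, 1, 0]            # f(x) = (x0 AND x1) XOR x2
spectrum = fwt([(-1) ** b for b in tt])  # signed correlations
nonlinearity = (len(tt) - max(abs(w) for w in spectrum)) // 2
print(nonlinearity)  # 2
```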
<A NAME = "UnicityDistance"></A>
<P><DT><B>Unicity Distance</B>
<DD>The amount of
<A HREF = "#Ciphertext">ciphertext</A> needed to uniquely identify
the correct
<A HREF = "#Key">key</A> and its associated
<A HREF = "#Plaintext">plaintext</A> (assuming a ciphertext-only
<A HREF = "#Attack">attack</A> and natural language plaintext).
With less ciphertext than the unicity distance, multiple keys may
produce decipherings which are each plausible messages, although
only one of these would be the correct solution.
As we increase the amount of ciphertext, many formerly-plausible
keys are eliminated, because the plaintext they produce becomes
identifiably different from the structure and redundancy we expect
in a natural language.
<BLOCKQUOTE>
<P>"If a secrecy system with a finite key is used, and <I>N</I>
letters of cryptogram intercepted, there will be, for the enemy,
a certain set of messages with certain probabilities, that this
cryptogram could represent. As <I>N</I> increases the field
usually narrows down until eventually there is a unique
'solution' to the cryptogram; one message with probability
essentially unity while all others are practically zero. A
quantity <I>H(N)</I> is defined, called the equivocation, which
measures in a statistical way how near the average cryptogram
of <I>N</I> letters is to a unique solution; that is, how
uncertain the enemy is of the original message after intercepting
a cryptogram of <I>N</I> letters." [p.659]
<P>"This gives a way of calculating approximately how much
intercepted material is required to obtain a solution to the
secrecy system. It appears <NOBR>. . .</NOBR> that with
ordinary languages and the usual types of ciphers (not codes)
this 'unicity distance'
is approximately <I>H(K)/D.</I> Here <I>H(K)</I> is a number
measuring the 'size' of the key space. If all keys are
<I>a priori</I> equally likely, <I>H(K)</I> is the logarithm
of the number of possible keys. <I>D</I> is the redundancy of
the <NOBR>language . . . ."</NOBR>
"In simple substitution with a random key
<I>H(K)</I> is <NOBR>log<SUB>10</SUB> 26!</NOBR> or about 20 and
<I>D</I> (in decimal digits per letter) is about .7 for English.
Thus unicity occurs at about 30 letters." [p.660]
</BLOCKQUOTE>
<P>Shannon, C. 1949. Communication Theory of Secrecy Systems.
<I>Bell System Technical Journal.</I>
28: 656-715.
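<P>Shannon's <I>H(K)/D</I> estimate is easy to evaluate. A common modern
restatement works in bits, with English redundancy taken as roughly 3.2 bits
per letter (that redundancy figure is an assumed round number; sources quote
somewhat different values):

```python
import math

# Unicity distance N ~ H(K) / D for simple substitution.
# H(K) = log2(26!) bits for a random-alphabet key; D ~ 3.2 bits/letter
# is a commonly quoted redundancy estimate for English (assumed here).
H_K = math.log2(math.factorial(26))  # about 88.4 bits
D = 3.2
N = H_K / D
print(round(N))  # about 28 letters, near Shannon's "about 30"
```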
<A NAME = "UniformDistribution"></A>
<P><DT><B>Uniform Distribution</B>
<DD>A probability
<A HREF = "#Distribution">distribution</A> in which each possible
value is equally likely. Also called a "flat" or "even" distribution.
<P>A uniform distribution is the most important distribution in
cryptography. For example, a cryptographer strives to make every
possible
<A HREF = "#Plaintext">plaintext</A> an equally likely
interpretation of any
<A HREF = "#Ciphertext">ciphertext</A> (see
<A HREF = "#IdealSecrecy">ideal secrecy</A>).
A cryptographer also strives to make every possible
<A HREF = "#Key">key</A> equally likely, given any amount of
<A HREF = "#KnownPlaintextAttack">known plaintext</A>.
<P>On the other hand, a uniform distribution with respect to one
quality is not necessarily uniform with respect to another.
For example, while keyed
<A HREF = "#Shuffle">shuffling</A> can provably produce any
possible
<A HREF = "#Permutation">permutation</A> with equal probability
(a uniform distribution of different
<A HREF = "#SubstitutionTable">tables</A>), those tables will have a
<A HREF = "#BooleanFunctionNonlinearity">Boolean function nonlinearity</A>
distribution which is decidedly not uniform. And we might well
expect a <I>different</I> non-uniform distribution for every
different quality we measure.
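<P>The shuffle claim can be checked exhaustively for a tiny table: driving a
Fisher-Yates shuffle with every possible sequence of swap choices produces
every permutation exactly once (a sketch; real keyed shuffling draws those
choices from a keyed random number generator):

```python
from itertools import product

# Fisher-Yates shuffle of a 3-entry table, driven by every possible
# sequence of swap choices.  Uniform choices give every permutation
# exactly once: a uniform distribution over tables.

def shuffle_with_choices(n, choices):
    p = list(range(n))
    for i, j in zip(range(n - 1, 0, -1), choices):
        p[i], p[j] = p[j], p[i]
    return tuple(p)

# At step i the swap index j ranges over 0..i: 3 x 2 choice sequences.
perms = [shuffle_with_choices(3, c) for c in product(range(3), range(2))]
print(len(set(perms)))  # 6: all 3! permutations, each produced once
```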
<A NAME = "VariableSizeBlockCipher"></A>
<P><DT><HR><P><B>Variable Size Block Cipher</B>
<DD>The ciphering concept described in U.S. Patent 5,727,062 (see the
<A HREF = "http://www.io.com/~ritter/#VSBCTech">VSBC articles</A> on the
<A HREF = "http://www.io.com/~ritter/">Ciphers By Ritter</A> page).
<P>A
<A HREF = "#BlockCipher">block cipher</A> which supports ciphering
in blocks of dynamically variable size. The
<A HREF = "#Block">block</A> size may vary only in steps
of some element size (for example, a
<A HREF = "#Byte">byte</A>), but blocks could be arbitrarily large.
<P>Three characteristics distinguish a true variable size block
cipher from designs which are merely imprecise about the size of
block or element they support or the degree to which they support
<A HREF = "#OverallDiffusion">overall diffusion</A>:
<OL>
<P><LI>A variable size block cipher is indefinitely extensible and
has no theoretical block size limitation;
<P><LI>A variable size block cipher can approach overall diffusion,
such that each
<A HREF = "#Bit">bit</A> in the output block is a function of
every bit in the input block; and
<P><LI>A true variable size block cipher does not require
additional steps (<A HREF = "#Round">rounds</A>) or
<A HREF = "#Layer">layers</A> to approach overall diffusion as
the block size is expanded.
</OL>
<P>Also see
<A HREF = "#DynamicSubstitutionCombiner">Dynamic Substitution Combiner</A> and
<A HREF = "#BalancedBlockMixing">Balanced Block Mixing</A>.
<A NAME = "Voltage"></A>
<P><DT><B>Voltage</B>
<DD>The measure of electron "potential" in
volts.
Voltage is analogous to water <I>pressure,</I> as opposed to
<I>flow</I> or
<A HREF = "#Current">current</A>.
<A NAME = "WalshFunctions"></A>
<P><DT><HR><P><B>Walsh Functions</B>
<DD>Walsh Functions are essentially the
<A HREF = "#AffineBooleanFunction">affine Boolean functions</A>,
although they are often represented with values {+1,-1}.
There are three different canonical orderings for these functions.
The worth of these functions largely rests on their being a
complete set of orthogonal functions. This allows any function
to be represented as a correlation to each of the Walsh functions.
This is a transform into an alternate basis which may be more
useful for analysis or construction.
<P>Also see:
<A HREF = "#FastWalshTransform">Fast Walsh-Hadamard Transform</A>.
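<P>In the Hadamard ordering, the Walsh functions are simply the rows of a
Sylvester-constructed Hadamard matrix, and their orthogonality is easy to
check (a minimal sketch):

```python
# Walsh functions (Hadamard order) via the Sylvester construction,
# values in {+1,-1}.  Distinct rows are orthogonal: their dot
# product is zero.

def walsh_rows(n):  # n must be a power of 2
    H = [[1]]
    while len(H) < n:
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H

H = walsh_rows(8)
print(sum(a * b for a, b in zip(H[1], H[2])))  # 0: orthogonal
```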
<A NAME = "Weight"></A>
<P><DT><B>Weight</B>
<DD>The weight of
<A HREF = "#BooleanFunction">Boolean Function</A> <I>f</I>
is the number of 1's in the
<A HREF = "#TruthTable">truth table</A> of <I>f</I>.
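<P>For example, using the two-input XOR function:

```python
# Weight of a Boolean function: the count of 1's in its truth table.
tt = [0, 1, 1, 0]   # truth table of 2-input XOR
print(sum(tt))      # 2
```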
<A NAME = "Whitening"></A>
<P><DT><B>Whitening</B>
<DD>An overly-cute description of making a signal or data more like
<A HREF = "#WhiteNoise">white noise</A>, with an equal amount of
energy in each frequency. To make data more
<A HREF = "#Random">random</A>-like.
<A NAME = "WhiteNoise"></A>
<P><DT><B>White Noise</B>
<DD>A
<A HREF = "#Random">random</A>-like signal with a flat
<A HREF = "#Frequency">frequency</A> spectrum, in which each
frequency has the same magnitude. As opposed to
<A HREF = "#PinkNoise">pink noise</A>, in which the frequency
spectrum drops off with frequency. White noise is analogous to
white light, which contains every possible color.
<P>White noise is normally described as a relative power density in
<A HREF = "#Voltage">volts</A> squared per hertz.
White noise power varies directly with bandwidth, so white noise
would have twice as much power in the next higher octave as in the
current one. The introduction of a white noise audio signal can
destroy high-frequency loudspeakers.
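<P>The octave relationship follows directly from the flat spectrum (a small
sketch using an arbitrary unit power density):

```python
# White noise power is proportional to bandwidth.  An octave spans
# f to 2f, so each octave has twice the bandwidth -- and thus twice
# the power -- of the octave below it.
density = 1.0                       # power per hertz, flat by definition
def octave_power(f):
    return density * (2 * f - f)    # bandwidth of the octave f..2f
print(octave_power(2000) / octave_power(1000))  # 2.0
```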
<A NAME = "Wire"></A>
<P><DT><B>Wire</B>
<DD>A thin, long
<A HREF = "#Conductor">conductor</A>, often considered "ideally
conductive" compared to other parts of a
<A HREF = "#Circuit">circuit</A>.
<A NAME = "XOR"></A>
<P><DT><HR><P><B>XOR</B>
<DD><A HREF = "#ExclusiveOR">Exclusive-OR</A>.
A Boolean
<A HREF = "#LogicFunction">logic function</A> which is also
<A HREF = "#Mod2">mod 2</A> addition.
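<P>The equivalence with mod 2 addition holds bit-for-bit:

```python
# XOR equals addition mod 2 on single bits.
for a in (0, 1):
    for b in (0, 1):
        assert a ^ b == (a + b) % 2
print("a ^ b == (a + b) % 2 for all bits")
```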
</DL>
<P><HR>
<I><A HREF = "AUTHOR.HTM">Terry Ritter</A>, his
<A HREF = "AUTHOR.HTM#Addr">current address</A>, and his
<A HREF = "CRYPHTML.HTM">top page</A>.</I>
<P>
</BODY>
</HTML>