
Meet 1 - Blog

"Number Theory is the Queen of Mathematics." - Carl Friedrich Gauss

Introduction


Modular arithmetic was developed by Carl Friedrich Gauss in 1801. Since then, it has been used extensively to study properties of numbers and classes of numbers.

To mention a few of its applications in computer science: modular arithmetic allows us to easily construct groups, rings, and fields, which are fundamental building blocks of most modern public-key cryptosystems.

In this and the next few scheduled meets, however, we are going to talk about applications of number theory and modular arithmetic in competitive programming.

Divisibility


Before talking about modular arithmetic directly, let us first revise the concepts of divisibility and the ideas revolving around $\gcd$ and $\text{lcm}$.

Greatest Common Divisor

Problem: Given two numbers $a$ and $b$, find the greatest common divisor of both.

$$\gcd(a, b) = \max_{k = 1 \dots \infty \ :\ k \mid a \ \wedge\ k \mid b} k$$

This brings us to one of the oldest algorithms known to mankind: the Euclidean algorithm for finding the $\gcd$ of two numbers.

From our knowledge of divisibility let us formulate the following:

$$a = q_0 b + r_0$$
$$b = q_1 r_0 + r_1$$
$$r_0 = q_2 r_1 + r_2$$
$$\ldots$$
$$r_{k-1} = q_{k+1} r_k + 0$$

Then, $\gcd(a, b) = \gcd(b, r_0) = \gcd(r_0, r_1) = \dots = \gcd(r_{k-1}, r_k) = r_k$.

$$\therefore \gcd(a, b) = \begin{cases} a, & \text{if } b = 0 \\ \gcd(b, a \bmod b), & \text{otherwise.} \end{cases}$$
int gcd(int a, int b)
{
    if (b == 0) return a;
    return gcd(b, a % b); // works even if a < b: the first call simply swaps the arguments
}

Extended GCD Algorithm

Given any pair $(a, b)$ we can always find integers $x$ and $y$ satisfying the identity below. To prove this, one can use a basic proof by contradiction.

$$ax + by = \gcd(a, b)$$

Problem: Find these coefficients.

int gcd(int a, int b, int& x, int& y)
{
    if (b == 0)
    {
        x = 1;
        y = 0;
        return a;
    }

    int x1, y1;
    int d = gcd(b, a % b, x1, y1);
    x = y1, y = x1 - y1 * (a / b);

    return d;
}

Terminology and Notation


$n \bmod m$ is defined as $n - \lfloor \frac{n}{m} \rfloor \cdot m$, where $\lfloor x \rfloor$ is the largest integer that doesn't exceed $x$. For $m > 0$ this always produces an integer between $0$ and $m-1$ inclusive.

In mathematical terms, $n \equiv x\ (\text{mod}\ m)$ if $\exists\ k \in \mathbb{N} \cup \{0\}$ such that $|n - x| = km$.

Also, if we have two expressions ff and gg then,

$$f \equiv g\ (\text{mod}\ m) \implies f \bmod m = g \bmod m$$

In C/C++, % is the modulus operator.

For example, 5 % 3 = 2, 10 % 2 = 0, -3 % 2 = -1 and 5 % -3 = 2 (the sign of the result follows the dividend).

Also, x % 0 is not defined for obvious reasons.

Precedence order:

  1. % / *
  2. + -

Therefore, a + b % m is not equal to (a + b) % m.

Modular Arithmetic


Basic Properties

Let's talk about some basic properties of the modular operation.

1. Addition

$$((a \bmod m) + (b \bmod m)) \bmod m = (a + b) \bmod m$$

Proof

Let, $a = mq_1 + r_1 \implies a \bmod m = r_1$

and $b = mq_2 + r_2 \implies b \bmod m = r_2$

Thus, $LHS = (r_1 + r_2) \bmod m$, and on the other hand we have

$$RHS = a + b = m(q_1 + q_2) + r_1 + r_2 \implies (a + b) \bmod m = (r_1 + r_2) \bmod m$$

2. Subtraction

$$((a \bmod m) - (b \bmod m)) \bmod m = (a - b) \bmod m$$

Proof

Let, $a = mq_1 + r_1 \implies a \bmod m = r_1$

and $b = mq_2 + r_2 \implies b \bmod m = r_2$

Thus, $LHS = (r_1 - r_2) \bmod m$, and on the other hand we have

$$RHS = a - b = m(q_1 - q_2) + r_1 - r_2 \implies (a - b) \bmod m = (r_1 - r_2) \bmod m$$

Note that $r_1 - r_2$ may be negative, so in code one typically adds $m$ before taking the remainder.

3. Multiplication

$$((a \bmod m) \times (b \bmod m)) \bmod m = (a \times b) \bmod m$$

Proof

Let, $a = mq_1 + r_1 \implies a \bmod m = r_1$

and $b = mq_2 + r_2 \implies b \bmod m = r_2$

Thus, $LHS = (r_1 \times r_2) \bmod m$, and on the other hand we have

$$RHS = a \times b = m^2q_1q_2 + mq_1r_2 + mq_2r_1 + r_1r_2 \implies (a \times b) \bmod m = (r_1 \times r_2) \bmod m$$

Example

You can apply the modulus operator to any ugly-looking expression as many times as you want and the final result will not be affected.

$$(152 + 133\times 41\times 59\times 6 + 105\times 123 - 87)\ \%\ 3 = \text{?}$$

Well, this is clearly equivalent to the following.

$$= 2 + 1\times 2\times 2\times 0 + 0\times 0 - 0 = 2$$

4. Division

This is slightly more complicated, and requires a concept called the "modular multiplicative inverse".

The modular multiplicative inverse of a number $a$ is the number $a^{-1}$ such that $a \times a^{-1} \bmod m = 1$. You may notice that this is similar to the concept of a reciprocal, but here we don't want a fraction; we want an integer, specifically an integer between $0$ and $m-1$ inclusive.

We should note that the modular inverse does not always exist. For example, let $m = 4, a = 2$. By checking all possible values modulo $m$ it should become clear that we cannot find $a^{-1}$ satisfying the above equation. It can be proven that the modular inverse exists if and only if $a$ and $m$ are relatively prime, i.e. $\gcd(a, m) = 1$.

But how do you actually find such a number? Brute-forcing over all residues modulo a prime close to a billion will usually exceed the time limit. There are two faster ways to calculate the inverse: the extended GCD algorithm, and Fermat's little theorem.

Using the extended GCD: we are looking for $a^{-1}$ with

$$a \times a^{-1} \equiv 1\ (\text{mod}\ m)$$

which is the same as saying that for some integer $y$

$$a \times a^{-1} + m \times y = 1$$

and this is exactly the equation the extended GCD algorithm solves:

$$a x + m y = \gcd(a, m) = 1 \implies ax \equiv 1\ (\text{mod}\ m)$$

Further Properties

Try to prove these on your own!

Theorems and Methods


Chinese Remainder Theorem

$$x \equiv a_1 \ (\text{mod}\ n_1)$$
$$x \equiv a_2 \ (\text{mod}\ n_2)$$
$$x \equiv a_3 \ (\text{mod}\ n_3)$$
$$\ldots$$
$$x \equiv a_r \ (\text{mod}\ n_r)$$

Let $N = n_1 \cdots n_r$ and $N_i = \frac{N}{n_i}$, with the $n_i$ pairwise coprime; then the solution for $x \bmod N$ shall be

$$x = \sum_i a_i N_i x_i$$

where $N_i x_i \equiv 1 \ (\text{mod}\ n_i)$.

Fermat's Theorem

According to Fermat's theorem, the following holds whenever $p$ is prime and $\gcd(a, p) = 1$.

$$a^{p-1} \equiv 1 \ (\text{mod}\ p)$$

Proof Outline

We have $a \cdot 2a \cdot 3a \cdots (p-1)a \equiv 1 \cdot 2 \cdots (p-1) \ (\text{mod}\ p)$, since multiplying the nonzero residues by $a$ merely permutes them modulo $p$.

From the equation above, Fermat's theorem (formally called Fermat's little theorem, in contrast with his last theorem) follows by cancelling the common factor $(p-1)!$ on both sides.

Wilson's Theorem

$$(p-1)! \equiv -1 \ (\text{mod}\ p)$$

Proof Outline

We know that $(p-1)! \equiv 1 \cdot 2 \cdot 3 \cdots (p-1) \ (\text{mod}\ p)$. Each factor in $2, 3, \ldots, p-2$ pairs up with its distinct modular inverse, so that inner product is $\equiv 1$.

$$\implies (p-1)! \equiv 1 \cdot (p-1) \cdot (2 \cdot 3 \cdots (p-2)) \equiv (p-1) \equiv -1 \ (\text{mod}\ p)$$

Backstory: Leibniz was probably the first person to find this result, but he never published it. The first published proof is due to Lagrange (1771).

More Theorems

Binary Exponentiation

Binary exponentiation (also known as exponentiation by squaring) is a trick which allows us to calculate $a^n$ using only $O(\log n)$ multiplications (instead of the $O(n)$ multiplications required by the naive approach).

It also has important applications in many tasks unrelated to arithmetic, since it can be used with any operation that has the property of associativity.

Raising $a$ to the power of $n$ is expressed naively as multiplication by $a$ done $n-1$ times: $a^n = a \times a \times \cdots \times a$. However, this approach is not practical for large $a$ or $n$.

The idea of binary exponentiation is that we split the work using the binary representation of the exponent.

Let's write $n$ in base 2, for example:

$$13 = 1101_2 = 8 + 4 + 1 \implies a^{13} = a^8 \cdot a^4 \cdot a^1$$

Since the number $n$ has exactly $\lfloor \log_2 n \rfloor + 1$ digits in base 2, we only need to perform $O(\log n)$ multiplications, if we know the powers $a^1, a^2, a^4, \ldots, a^{2^{\lfloor \log_2 n \rfloor}}$.

So we only need to know a fast way to compute those. Luckily this is very easy, since an element in the sequence is just the square of the previous element.

$$\begin{cases} 3^1 &= 3 \\ 3^2 &= (3^1)^2 = 9 \\ 3^4 &= (3^2)^2 = 81 \\ 3^8 &= (3^4)^2 = 6561 \end{cases}$$

So to get the final answer for $3^{13}$, we only need to multiply three of them (skipping $3^2$ because the corresponding bit in $n$ is not set): $3^{13} = 6561 \cdot 81 \cdot 3 = 1594323$.

The final complexity of this algorithm is $O(\log n)$: we have to compute $\log n$ powers of $a$, and then do at most $\log n$ multiplications to get the final answer from them.

The following recursive approach expresses the same idea:

$$a^n = \begin{cases} 1 & \text{if } n = 0 \\ \left(a^{n/2}\right)^2 & \text{if } n > 0 \text{ and } n \text{ even} \\ \left(a^{(n-1)/2}\right)^2 \cdot a & \text{if } n > 0 \text{ and } n \text{ odd} \end{cases}$$

Implementation

// binpow(a, b) => a^b
long long binpow(long long a, long long b) 
{
    if (b == 0)
        return 1;
    long long res = binpow(a, b / 2);
    if (b % 2)
        return res * res * a;
    else
        return res * res;
}


// Iterative version of the same idea.
long long binpow(long long a, long long b) {
    long long res = 1;
    while (b > 0) {
        if (b & 1)
            res = res * a;
        a = a * a;
        b >>= 1;
    }
    return res;
}

Now the Coolest Question!

Problem: Find the $n^{th}$ Fibonacci number.

$$\begin{bmatrix} F_n & F_{n+1} \end{bmatrix} = \begin{bmatrix} F_0 & F_1 \end{bmatrix} \cdot M^n$$

where we have the matrix MM as

$$M = \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}$$