# Meet 1 - Blog


> "Number Theory is the Queen of Mathematics." - Carl Friedrich Gauss

# Introduction

Modular arithmetic was developed by Carl Friedrich Gauss in 1801. Since then, it has been used extensively to study properties of numbers and classes of numbers.

Among its many applications in computer science, modular arithmetic allows us to easily construct groups, rings, and fields, which are fundamental building blocks of most modern public-key cryptosystems.

In this and the next few scheduled meets, however, we are going to talk about applications of number theory and modular arithmetic in competitive programming.

# Divisibility

Before directly talking about modular arithmetic, let us first revise the concepts of divisibility and the ideas revolving around $\gcd$ and $\text{lcm}$.

- **Division algorithm:** for any integers $a$ and $b$ with $b > 0$, there exist unique integers $(q, r)$ such that $a = bq + r$ and $0 \leq r < b$.

## Greatest Common Divisor

Problem: Given two numbers $a$ and $b$ find the greatest common divisor of both.

This brings us to one of the oldest algorithms known to mankind: the Euclidean algorithm for finding the $\gcd$ of two numbers.

From our knowledge of divisibility, let us repeatedly apply the division algorithm:

$a = bq_0 + r_0,\quad b = r_0q_1 + r_1,\quad r_0 = r_1q_2 + r_2,\quad \dots,\quad r_{k-1} = r_kq_{k+1} + 0$

Then, $\gcd(a, b) = \gcd(b, r_0) = \gcd(r_0, r_1) =\dots = \gcd(r_{k-1}, r_k) = r_k$.

```
// Euclidean algorithm: gcd(a, b) = gcd(b, a mod b).
// Works even if a < b: the first recursive call swaps them.
int gcd(int a, int b)
{
    if (b == 0) return a;
    return gcd(b, a % b);
}
```

## Extended GCD Algorithm

Given any $(a, b)$, we can always find integers $x$ and $y$ such that $a\times x + b\times y = \gcd(a, b)$ (Bézout's identity). To prove this, one can use a basic proof by contradiction.

Problem: Find these coefficients $x$ and $y$.

```
// Extended Euclid: returns gcd(a, b) and fills x, y
// such that a*x + b*y = gcd(a, b).
int gcd(int a, int b, int& x, int& y)
{
    if (b == 0)
    {
        x = 1;
        y = 0;
        return a;
    }
    int x1, y1;
    int d = gcd(b, a % b, x1, y1);
    x = y1;
    y = x1 - y1 * (a / b);
    return d;
}
```

# Terminology and Notation

**$n\ \text{mod}\ m$** is defined as $n - \lfloor \frac{n}{m} \rfloor \times m$, where $\lfloor x \rfloor$ is the largest integer that does not exceed $x$. For $m > 0$, this always produces an integer between $0$ and $m-1$ inclusive.

In mathematical terms, $n \equiv x\ (\text{mod}\ m)$ if $\exists\ k \in \mathbb{N} \cup \{0\}$ such that $|n - x| = km$.

Also, if we have two expressions $f$ and $g$, then $f \equiv g\ (\text{mod}\ m)$ if and only if $m$ divides $f - g$.

In C/C++, `%` is the modulus (remainder) operator. For example, `5 % 3 = 2` and `10 % 2 = 0`. Since C/C++ truncate division toward zero, the result takes the sign of the dividend: `-3 % 2 = -1` and `5 % -3 = 2`.

Also, `x % 0` is not defined, for obvious reasons.

Precedence: `*`, `/`, and `%` share the same precedence level, which is higher than that of `+` and `-`.

Therefore, `a + b % m` parses as `a + (b % m)` and is not equal to `(a + b) % m`.

# Modular Arithmetic

## Basic Properties

Let's talk about some basic properties of the modular operation.

- $a \equiv a \ (\text{mod}\ m)$

- $a \equiv b\ (\text{mod}\ m) \iff b \equiv a\ (\text{mod}\ m)$

- If $a \equiv b\ (\text{mod}\ m)$ and $b \equiv c \ (\text{mod}\ m)$ then $a \equiv c\ (\text{mod}\ m)$

## 1. Addition

$((a\ \text{mod}\ m) + (b\ \text{mod}\ m))\ \text{mod}\ m = (a + b)\ \text{mod}\ m$

### Proof

Let, $a = mq_1 + r_1 \implies a\ \text{mod}\ m = r_1$

and $b = mq_2 + r_2 \implies b\ \text{mod}\ m = r_2$

Thus, $LHS = (r_1 + r_2)\ \text{mod}\ m$, and on the other hand we have $RHS = (m(q_1 + q_2) + r_1 + r_2)\ \text{mod}\ m = (r_1 + r_2)\ \text{mod}\ m = LHS$.

## 2. Subtraction

$((a\ \text{mod}\ m) - (b\ \text{mod}\ m))\ \text{mod}\ m = (a - b)\ \text{mod}\ m$

### Proof

Let, $a = mq_1 + r_1 \implies a\ \text{mod}\ m = r_1$

and $b = mq_2 + r_2 \implies b\ \text{mod}\ m = r_2$

Thus, $LHS = (r_1 - r_2)\ \text{mod}\ m$, and on the other hand we have $RHS = (m(q_1 - q_2) + r_1 - r_2)\ \text{mod}\ m = (r_1 - r_2)\ \text{mod}\ m = LHS$.

## 3. Multiplication

$((a\ \text{mod}\ m) \times (b\ \text{mod}\ m))\ \text{mod}\ m = (a \times b)\ \text{mod}\ m$

### Proof

Let, $a = mq_1 + r_1 \implies a\ \text{mod}\ m = r_1$

and $b = mq_2 + r_2 \implies b\ \text{mod}\ m = r_2$

Thus, $LHS = (r_1 \times r_2)\ \text{mod}\ m$, and on the other hand we have $RHS = ((mq_1 + r_1)(mq_2 + r_2))\ \text{mod}\ m = (m(mq_1q_2 + q_1r_2 + q_2r_1) + r_1r_2)\ \text{mod}\ m = (r_1 \times r_2)\ \text{mod}\ m = LHS$.

## Example

You can apply the modulus operator to the individual terms of an expression as many times as you want, and the final result will not be affected.

$(152 + 133\times41\times59\times6 + 105\times123 - 87)\ \%\ 3\ \text{?}$

Reducing each term modulo $3$ first ($152 \equiv 2$, $6 \equiv 0$, $105 \equiv 0$, $87 \equiv 0$), this clearly is equivalent to

$(2 + 133\times41\times59\times0 + 0\times123 - 0)\ \%\ 3 = 2$

## 4. Division

This is slightly more complicated, and requires a concept called the "modular multiplicative inverse".

The modular multiplicative inverse of a number $a$ is the number $a^{-1}$ s.t. $a \times a^{-1}\ \text{mod}\ m = 1$. You may notice that this is similar to the concept of a reciprocal, but here we don't want a fraction; we want an integer, specifically an integer between 0 and $m-1$ inclusive.

We should note that the modular inverse does not always exist. For example, let $m = 4, a = 2$. By checking all possible values modulo $m$, it should become clear that we cannot find $a^{-1}$ satisfying the above equation. It can be proven that the modular inverse exists if and only if $a$ and $m$ are relatively prime, i.e. $\gcd(a, m) = 1$.

But how do you actually find such a number? Brute-forcing over all residues when the modulus is a prime close to a billion will usually cause you to exceed the time limit. There are two faster ways to calculate the inverse: the extended GCD algorithm, and Fermat's little theorem.

- $a^{-1}$ using Fermat's little theorem

**Fermat's result:** $a^{p-1} \equiv 1\ (\text{mod}\ p)$, where $p$ is prime and $\gcd(a, p) = 1$.

$a^{p-1} \equiv 1\ (\text{mod}\ p)$

$a\times a^{p-2} \equiv 1\ (\text{mod}\ p)$

$\therefore \text{We can see that }a^{p-2} \equiv a^{-1}\ (\text{mod}\ p)$

- $a^{-1}$ using the extended GCD algorithm

$a\times a^{-1}\equiv 1\ (\text{mod}\ m)$

$a\times a^{-1} + m\times y = 1$

This has exactly the form $a\times x + m\times y = \gcd(a, m)$ with $\gcd(a, m) = 1$, so running the extended GCD algorithm on $(a, m)$ yields $x = a^{-1}$.

## Further Properties

- $ca \equiv cb\ (\text{mod}\ n) \implies a \equiv b\ (\text{mod}\ \frac{n}{\gcd(c,n)})$

- $ax \equiv 1\ (\text{mod}\ n)$ has a unique solution modulo $n$ if and only if $\gcd(a, n) = 1$

- $ax \equiv b\ (\text{mod}\ n) \implies ax - ny = b$, and thus solutions exist if and only if $\gcd(a, n)$ divides $b$

- $ax+by\equiv r\ (\text{mod}\ n)$ and $cx+dy \equiv s\ (\text{mod}\ n)$ have a unique solution $(x, y)$ modulo $n$, if and only if $\gcd(ad-bc, n)=1$

Try to prove these on your own!

# Theorems and Methods

## Chinese Remainder Theorem

Suppose $n_1, \dots, n_r$ are pairwise coprime and we are given the congruences $x \equiv a_i\ (\text{mod}\ n_i)$ for $i = 1, \dots, r$. Let $N = n_1\cdots n_r$ and $N_i = \frac{N}{n_i}$. Then the unique solution modulo $N$ is

$x \equiv \sum_{i=1}^{r} a_i N_i x_i\ (\text{mod}\ N)$

where $N_ix_i \equiv 1\ (\text{mod}\ n_i)$.

## Fermat's Theorem

According to Fermat's theorem, if $p$ is prime and $\gcd(a, p) = 1$, then

$a^{p-1} \equiv 1\ (\text{mod}\ p)$

**Proof Outline**

Since $\gcd(a, p) = 1$, the residues $a, 2a, 3a, \dots, (p-1)a$ are a permutation of $1, 2, \dots, (p-1)$ modulo $p$. Hence $a\cdot 2a\cdot 3a \cdots (p-1)a \equiv 1\cdot 2 \cdots (p-1)\ (\text{mod}\ p)$, i.e. $a^{p-1}(p-1)! \equiv (p-1)!\ (\text{mod}\ p)$.

Cancelling $(p-1)!$, which is coprime to $p$, yields the theorem, formally called Fermat's little theorem in contrast with his last theorem.

## Wilson's Theorem

$(p-1)! \equiv -1\ (\text{mod}\ p)$ if and only if $p$ is prime.

**Proof Outline**

We know that $(p-1)! \equiv 1\cdot2\cdot3\cdots(p-1)\ (\text{mod}\ p)$. For prime $p$, every factor except $1$ and $p-1$ pairs up with its distinct modular inverse, so the product reduces to $1\cdot(p-1) \equiv -1\ (\text{mod}\ p)$.

Backstory: Leibniz was probably the first person to discover this result, but he never published it. The first formal proof is generally attributed to Lagrange.

## More Theorems

- If $p, q$ are two distinct primes such that $a^p \equiv 1 \ (\text{mod}\ q)$ and $a^q \equiv 1 \ (\text{mod}\ p)$, then $a^{pq} \equiv 1\ (\text{mod}\ pq)$.

- For an odd prime $p$, $x^2 + 1 \equiv 0\ (\text{mod}\ p)$ has a solution if and only if $p \equiv 1 \ (\text{mod}\ 4)$.

## Binary Exponentiation

Binary exponentiation (also known as exponentiation by squaring) is a trick which allows us to calculate $a^n$ using only $O(\log n)$ multiplications (instead of the $O(n)$ multiplications required by the naive approach).

It also has important applications in many tasks unrelated to arithmetic, since it can be used with any operations that have the property of associativity:

Raising $a$ to the power of $n$ is expressed naively as multiplication by $a$ done $n-1$ times: $a^n = a\times a\times \dots \times a$. However, this approach is not practical for large $a$ or $n$.

The idea of binary exponentiation is, that we split the work using the binary representation of the exponent.

Let's write $n$ in base 2. For example:

$3^{13} = 3^{1101_2} = 3^8 \cdot 3^4 \cdot 3^1$

Since the number $n$ has exactly $\lfloor \log_2 n\rfloor + 1$ digits in base 2, we only need to perform $O(\log n)$ multiplications, if we know the powers $a^1,\ a^2,\ a^4,\ \dots,\ a^{2^{\lfloor \log_2 n \rfloor}}$.

So we only need to know a fast way to compute those. Luckily this is very easy, since an element in the sequence is just the square of the previous element:

$3^1 = 3,\quad 3^2 = 9,\quad 3^4 = 81,\quad 3^8 = 6561$

So to get the final answer for $3^{13}$, we only need to multiply three of them (skipping $3^2$ because the corresponding bit in $n$ is not set): $3^{13} = 6561 \cdot 81 \cdot 3 = 1594323$.

The final complexity of this algorithm is $O(\log n)$: we have to compute $\log n$ powers of $a$, and then do at most $\log n$ multiplications to combine them into the final answer.

The following recursive approach expresses the same idea:

### Implementation

```
// binpow(a, b) => a^b
long long binpow(long long a, long long b)
{
    if (b == 0)
        return 1;
    long long res = binpow(a, b / 2);
    if (b % 2)
        return res * res * a;
    else
        return res * res;
}
```

The same idea can also be implemented iteratively, processing the bits of $b$ from least significant to most significant:

```
long long binpow(long long a, long long b)
{
    long long res = 1;
    while (b > 0) {
        if (b & 1)
            res = res * a;   // current bit is set: multiply into the answer
        a = a * a;           // square for the next bit
        b >>= 1;
    }
    return res;
}
```

## Now the Coolest Question!

Problem: Find the $n^{th}$ Fibonacci number.

The key identity is

$\begin{pmatrix} F_{n+1} \\ F_n \end{pmatrix} = M^n \begin{pmatrix} F_1 \\ F_0 \end{pmatrix}$

where we have the matrix $M$ as

$M = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$

so $F_n$ can be computed with binary exponentiation of $M$ in $O(\log n)$ matrix multiplications.