CSC 401 Lesson 2

Asymptotic Analysis of Algorithms (Growth of Functions)

Resources for an algorithm are usually expressed as a function of the input size. Often this
function is messy and complicated to work with. For an effective study of function growth, the
function should be reduced to its most important part.

Let f(n) = an² + bn + c

In this function, the n² term dominates when n gets sufficiently large. In function
reduction, we are interested in the dominant term, because it determines the function's
growth rate. Thus, we ignore all constants and coefficients and look only at the highest-order
term in n.

Asymptotic analysis
It is a technique for representing the limiting behavior of a function, which can be used to
analyze the performance of an algorithm on large data sets.
In algorithm analysis (considering the performance of algorithms when applied to very
large input datasets), the simplest example is the function
f(n) = n² + 3n,
where the term 3n becomes insignificant compared to n² when n is very large. The function
f(n) is said to be asymptotically equivalent to n² as n → ∞, written symbolically as
f(n) ~ n².
Asymptotic notations are used to express the fastest and slowest possible
running times of an algorithm, also known as the 'best case' and 'worst case' scenarios
respectively.
In asymptotic notation, we derive the complexity with respect to the size
of the input, n. These notations enable us to estimate the complexity of an algorithm
without computing its exact running cost. They compare functions while
ignoring constant factors and small input sizes.

Importance of Asymptotic Notations


a. They provide a simple characterization of an algorithm's efficiency.
b. They allow the performances of various algorithms to be compared.
Types of Asymptotic Notations:
Three notations are used to describe the running-time complexity of an algorithm:
Big-O notation:
Big-O is the formal method of expressing the upper bound of an algorithm's running
time; it is a measure of the longest amount of time the algorithm can take. The function
f(n) = O(g(n)) [read as "f of n is big-O of g of n"] if and only if there exist positive
constants c and n₀ such that
f(n) ≤ c·g(n) for all n ≥ n₀.
Hence, g(n) is an upper bound for f(n): g(n) grows at least as fast as f(n).

Examples:
1. 3n + 2 = O(n), since 3n + 2 ≤ 4n for all n ≥ 2
2. 3n + 3 = O(n), since 3n + 3 ≤ 4n for all n ≥ 3
Hence, the complexity of f(n) can be represented as O(g(n)).
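To make the definition concrete, here is a small Python check (an illustrative sketch, not part of the formal material) that verifies the constants from example 1, c = 4 and n₀ = 2, over a finite range:

# Verify that 3n + 2 <= 4n holds for 2 <= n <= 1000.
def holds(n):
    return 3 * n + 2 <= 4 * n

assert all(holds(n) for n in range(2, 1001))
print("3n + 2 <= 4n verified for 2 <= n <= 1000")

A finite check is only a sanity test, of course; the inequality itself holds for all n ≥ 2 by simple algebra (3n + 2 ≤ 4n if and only if n ≥ 2).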

Big Omega (Ω) Notation


The function f(n) = Ω(g(n)) [read as "f of n is omega of g of n"] if and only if
there exist positive constants c and n₀ such that
f(n) ≥ c·g(n) for all n ≥ n₀.
Example:
f(n) = 4n² + 2n − 3 ≥ 4n² − 3 ≥ n² for all n ≥ 1,
so f(n) = Ω(n²) with c = 1 and n₀ = 1.
Hence, the complexity of f(n) can be represented as Ω(g(n)).
Big Theta (θ)
The function f(n) = θ(g(n)) [read as "f of n is theta of g of n"] if and
only if there exist positive constants k₁, k₂ and n₀ such that
k₁·g(n) ≤ f(n) ≤ k₂·g(n) for all n ≥ n₀.
For example:
3n + 2 = θ(n), since 3n + 2 ≥ 3n and 3n + 2 ≤ 4n for all n ≥ 2,
with k₁ = 3, k₂ = 4, and n₀ = 2.
Hence, the complexity of f(n) can be represented as θ(g(n)).
The theta notation is more precise than both the big-O and omega
notations: f(n) = θ(g(n)) means g(n) is both an upper and a lower
bound for f(n).

ALGORITHM DESIGN STRATEGIES


An algorithm design strategy (or "technique" or "paradigm") is a general stepwise approach to
solving problems that is applicable to a variety of problems from different areas of
computing.

Reasons for Learning these strategies:


• They provide guidance for designing algorithms for new problems (problems with no
known satisfactory algorithm).
• Algorithm classification: design techniques enable algorithms to be classified
according to an underlying design idea; therefore, they can serve as a natural way to both
categorize and study algorithms.
Though algorithm design techniques provide a powerful set of general approaches to algorithmic
problem solving, designing an algorithm may still be a challenging task; some problems have
no applicable design strategy.
Typical Algorithm Design Strategies

• Brute Force
• Divide and Conquer Approach
• Greedy Strategy
• Dynamic Programming
• Branch and Bound
• Randomized Algorithm
• Backtracking Algorithm

Brute Force
This is a simple technique with a naïve approach: it relies on raw processing power and tests
all possibilities. A scenario where a brute-force search can be used: suppose you forgot the
combination of a 4-digit padlock and still want to use it. The padlock can be opened by trying
all possible 4-digit combinations of the digits 0 to 9. The combination could be anything
between 0000 and 9999, hence there are 10,000 combinations; so in the worst case, to find the
actual combination, you may have to try up to 10,000 possibilities, as in the sketch below.
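A minimal Python sketch of this exhaustive search (here is_correct is a hypothetical stand-in for physically trying a code on the padlock):

# Brute force: try every one of the 10,000 possible 4-digit codes.
def crack(is_correct):
    for code in range(10000):
        guess = f"{code:04d}"          # zero-padded string, e.g. "0042"
        if is_correct(guess):
            return guess
    return None

secret = "7351"                        # stand-in secret for demonstration
print(crack(lambda g: g == secret))    # prints 7351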
For string matching, the time complexity of brute force is O(mn), which can also be written as
O(n*m): to search for a pattern of n characters in a text of m characters, up to n*m character
comparisons may be needed.
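A sketch of this naive string search in Python (assuming the straightforward character-by-character method the complexity statement refers to):

# Naive substring search: O(m*n) comparisons in the worst case
# for a pattern of n characters in a text of m characters.
def brute_force_search(text, pattern):
    m, n = len(text), len(pattern)
    for i in range(m - n + 1):           # every starting position in the text
        if text[i:i + n] == pattern:     # up to n character comparisons
            return i                     # index of the first match
    return -1                            # no match found

print(brute_force_search("abracadabra", "cad"))   # prints 4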
Divide and Conquer Approach
This algorithmic technique is preferred for complex problems. As the name implies, it uses a
top-down approach with the following steps:
Step 1: Divide the problem into several subproblems.
Step 2: Conquer, i.e. solve, each subproblem.
Step 3: Combine the subproblem solutions to get the required result.

Divide and conquer solves each subproblem recursively, so each subproblem is a smaller
instance of the original problem. An example is shown in Figure 1.

Examples of some standard algorithms of the divide-and-conquer variety:
a. Binary Search: a searching algorithm.
b. Quicksort: a sorting algorithm.
c. Merge Sort: a sorting algorithm.
d. Closest Pair of Points: finding the closest pair among a set of points in the x-y plane.
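For instance, binary search from the list above follows these steps exactly: divide the sorted array at its midpoint, conquer by recursing into one half, and combine trivially. A minimal recursive sketch in Python:

# Recursive binary search on a sorted list: O(log n) time.
def binary_search(a, target, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo > hi:                  # base case: empty range, not found
        return -1
    mid = (lo + hi) // 2         # divide: split at the midpoint
    if a[mid] == target:
        return mid
    elif a[mid] < target:        # conquer: recurse into the right half
        return binary_search(a, target, mid + 1, hi)
    else:                        # conquer: recurse into the left half
        return binary_search(a, target, lo, mid - 1)

print(binary_search([1, 3, 5, 8, 13, 21], 8))   # prints 3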

Figure 1: Divide and Conquer


Greedy Algorithm
This is an algorithm strategy that builds up a solution piece by piece, always choosing the next
piece that offers the most obvious and immediate benefit. It is used to solve optimization
problems in which a given set of input values must be maximized or minimized according to the
objective. A greedy algorithm always solves a problem by choosing the option that appears to
be the best at the moment (hence the name greedy); it may not always give the optimal solution.
There are two stages to solving a problem using a greedy algorithm:
a) Examining the list of items.
b) Optimization.
This means that a greedy algorithm selects the best immediate option without reconsidering
its earlier decisions. When it comes to optimizing a solution, this implies that the greedy
approach seeks out local optimum solutions, of which there may be several, and may skip a
global optimal solution. For example, the greedy algorithm in Figure 2 below aims to locate the
path with the largest sum.

Figure 2: Greedy Algorithm


With the goal of reaching the largest sum, at each step the greedy algorithm chooses what
appears to be the optimal immediate option; so it chooses 12 instead of 3 at the second step
and never reaches the best solution, which contains 99. A sketch of this behavior follows.
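A minimal Python sketch of this greedy descent (the level values here are a hypothetical reconstruction of the Figure 2 tree, chosen so that 12 beats 3 while the best path runs through 99):

# Greedy walk down a binary tree of values, always taking the larger child.
# Levels are lists; the children of node i sit at positions 2i and 2i+1.
tree = [[7], [3, 12], [99, 1, 5, 6]]     # assumed values, not from the notes

def greedy_path_sum(levels):
    total = levels[0][0]
    index = 0
    for level in levels[1:]:
        left, right = level[2 * index], level[2 * index + 1]
        index = 2 * index if left >= right else 2 * index + 1
        total += max(left, right)        # locally best choice
    return total

print(greedy_path_sum(tree))   # 7 + 12 + 6 = 25, missing 7 + 3 + 99 = 109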

Examples of Greedy Algorithms


a. Prim's Minimal Spanning Tree Algorithm.
b. Travelling Salesman Problem.
c. Graph – Map Coloring.
d. Kruskal's Minimal Spanning Tree Algorithm.
e. Dijkstra's Shortest Path Algorithm.
f. Graph – Vertex Cover.
g. Knapsack Problem.
h. Job Scheduling Problem.

Dynamic Programming
Dynamic Programming (DP) is an algorithmic technique for solving optimization problems by
breaking them into simpler subproblems and storing each sub-solution for reuse. For instance,
when using this technique to work through a chain of calculations, the result of the first
calculation is saved and substituted into later equations instead of being recalculated; it is
therefore suited to complicated equations and processes, and is both a mathematical
optimization method and a computer programming method. The subproblems are optimized to
find the overall solution, which usually amounts to finding the maximum or minimum of some
quantity. DP can be used to calculate the Fibonacci series, in which each number is
the sum of the two preceding numbers. Suppose the first two numbers of the series are 0 and 1.

To find the nth number of the series, the overall problem, i.e. Fib(n), can be tackled by
breaking it down into two smaller subproblems, Fib(n-1) and Fib(n-2). Hence we can use
dynamic programming to solve the problem, as elaborated in Figure 3:

Figure 3: Fibonacci Series using Dynamic Programming
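A minimal Python sketch of this idea, storing each Fib(k) the first time it is computed so it is never recalculated (a top-down, memoized version using the 0, 1 starting values from the text above):

# Top-down dynamic programming (memoization) for Fibonacci numbers.
def fib(n, memo={}):
    if n in memo:
        return memo[n]                  # reuse a stored sub-solution
    if n < 2:
        return n                        # base cases: fib(0) = 0, fib(1) = 1
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print([fib(i) for i in range(10)])      # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]

With the memo, each Fib(k) is computed only once, so the running time drops from exponential to O(n).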

Some examples of Dynamic Programming algorithms are:


a. Tower of Hanoi
b. Dijkstra Shortest Path
c. Fibonacci sequence
d. Matrix chain multiplication
e. Egg-dropping puzzle, etc

Branch and Bound (BnB) Algorithm


BnB is an algorithmic design strategy for solving combinatorial and discrete optimization
problems. Many optimization problems that cannot be solved in polynomial time are handled with
BnB. The algorithm enumerates candidate solutions in a stepwise manner by exploring
the set of all possible solutions. An important advantage of branch-and-bound algorithms is that
we can control the quality of the solution to be expected even before it is found: the cost of an
optimal solution is at most a known bound smaller than the cost of the best solution computed so far.
Figure 4: Branch and Bound Algorithm Example

Firstly, a rooted decision tree is built, in which the root node represents the entire search space
and each child node represents a partial solution, part of the solution set. Based on the optimal
solution, we set an upper and a lower bound for the problem before constructing the rooted
decision tree, and at each level we must decide which node to include in the solution set.
Finding these bounds is very important: an upper bound can be found with any local
optimization method or by picking any point in the search space, while a convex relaxation or
duality can be used for finding a lower bound.
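As a small illustration (a hypothetical instance, not from the notes), here is a minimal branch-and-bound sketch for the 0/1 knapsack problem. Each level of the implicit decision tree decides one item, and a simple optimistic upper bound (current value plus all remaining values) prunes branches that cannot beat the best solution found so far:

# Minimal branch and bound for 0/1 knapsack (maximization).
values   = [60, 100, 120]      # assumed example data
weights  = [10, 20, 30]
capacity = 50
best = 0

def bnb(i, value, weight):
    global best
    if weight > capacity:
        return                                   # infeasible branch: prune
    best = max(best, value)
    if i == len(values):
        return                                   # all items decided
    # Optimistic upper bound: pretend every remaining item fits for free.
    if value + sum(values[i:]) <= best:
        return                                   # bound: cannot beat best
    bnb(i + 1, value + values[i], weight + weights[i])   # branch: take item i
    bnb(i + 1, value, weight)                            # branch: skip item i

bnb(0, 0, 0)
print(best)   # 220 (taking the items worth 100 and 120)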

Examples of BnB problems


a. Crew scheduling
b. Network flow problems
c. Production planning
d. Traveling Salesman Problem
e. Job Assignment Problem

Randomized Algorithm
The randomized algorithm strategy uses random numbers to decide the next line of action at any
point in its logic. Within a standard algorithm it is typically used to reduce either the running
time (time complexity) or the memory used (space complexity). The algorithm works by
generating a random number, r, from a set of numbers and making decisions based on its value.
Such an algorithm can help make a decision in a situation of doubt, much like flipping a coin or
drawing a card from a deck.

Figure 5: Randomized Algorithm Flowchart (the algorithm takes the input together with a random number and produces the output)
The output of a randomized algorithm on a given input is a random variable; thus,
there may be a positive probability that the outcome is incorrect. As long as the
probability of error is small for every possible input to the algorithm, this is not a
problem.
When utilizing a randomized method, keep the following two points in mind: the algorithm
takes a source of random numbers along with its input and makes random choices during
execution, and its behavior can vary even on a fixed input.
Two main types of randomized algorithms:
a. Las Vegas algorithms
b. Monte-Carlo algorithms.

Examples of Randomized algorithm


• In quicksort: using a random number to choose a pivot, as in the sketch below.
• Trying to factor a large number by choosing random numbers as possible divisors.
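A minimal sketch of the first example in Python (randomized quicksort; the random pivot makes the worst case unlikely on every fixed input):

import random

# Quicksort with a randomly chosen pivot: expected O(n log n) time.
def quicksort(a):
    if len(a) <= 1:
        return a                        # base case
    pivot = random.choice(a)            # the random choice drives the algorithm
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([5, 2, 9, 1, 5, 6]))    # [1, 2, 5, 5, 6, 9]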

Backtracking Algorithms
This technique steps backward to try another option when the current solution fails. It is a
method for solving problems recursively by attempting to build a solution incrementally, one
piece at a time, discarding any solutions that fail to satisfy the problem's constraints at any point
in time. It can be said to use a brute-force approach to resolve problems with multiple solutions:
it finds a solution by building it step by step, increasing levels over time, using recursive calls.
A search tree known as the state-space tree is used to find these solutions; each branch in a
state-space tree represents a variable, and each level represents a solution.
A backtracking algorithm uses the depth-first search method. When the algorithm begins to
explore the solutions, a bounding function is applied so that the algorithm can determine
whether the proposed solution satisfies the constraints. If it does, the search continues; if it does
not, the branch is removed and the algorithm returns to the previous level.
In any backtracking algorithm, the algorithm seeks a path to a feasible solution through some
intermediate checkpoints. If the checkpoints do not lead to a viable solution, the algorithm can
return to a checkpoint and take another path to find a solution.
The algorithm works as follows. Given a problem:

Backtrack(s):
    if s is not a solution:
        return false
    if s is a new solution:
        add s to the list of solutions
    backtrack(expand s)

For example, suppose we want to find all possible ways of arranging 2 boys and 1 girl on 3
benches, with the constraint that the girl should not be on the middle bench. There are
3! = 6 (3×2×1) possible arrangements. All of them are tried recursively to obtain the
required solutions, as shown:
Figure 6: Solution of backtracking
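A minimal backtracking sketch of this bench problem in Python (assuming benches indexed 0 to 2, with bench 1 as the middle one):

# Backtracking: seat Boy1, Boy2 and Girl on benches 0..2, pruning any
# partial arrangement that puts the girl on the middle bench (index 1).
people = ["Boy1", "Boy2", "Girl"]
solutions = []

def arrange(benches, remaining):
    if not remaining:                   # all three seated: record a solution
        solutions.append(benches[:])
        return
    for p in remaining:
        benches.append(p)               # place p on the next bench
        if not (p == "Girl" and len(benches) - 1 == 1):   # constraint check
            arrange(benches, [q for q in remaining if q != p])
        benches.pop()                   # backtrack and try another person

arrange([], people)
print(len(solutions), solutions)        # 4 of the 3! = 6 arrangements are valid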

Examples of Backtracking algorithm problems:


• Finding a feasible solution to a decision problem.
• Optimization problems.
• Finding all feasible solutions to an enumeration problem.
• Finding all Hamiltonian paths present in a graph.
• Solving the N-Queens problem.
• Knight's Tour problem, etc.
On the other hand, backtracking is not regarded as an optimal problem-solving technique; it is
useful when the solution to a problem does not have a time limit.

RECURSION AND RECURSIVE ALGORITHM


Recursion is a method of solving problems that involves breaking a problem down into smaller
subproblems until you reach a problem small enough to be solved trivially. In computer
science, recursion involves a function calling itself; with recursion, complex problems can be
programmed elegantly.
Recursion in computer science is a programming technique in which a procedure,
subroutine, function, or algorithm calls itself one or more times until a specified condition is
met, at which point the rest of each repetition is processed, from the last one called back to
the first. Most programming languages support recursion.

Instances of Recursion
There are two main instances of recursion.
• Recursion as a technique in which a function makes one or more calls to itself.
• A data structure that uses smaller instances of the exact same type of data structure to
represent itself.

Importance of Recursion
• It provides an alternative for performing repetitive tasks where a loop is not ideal.
• It serves as a great tool for building out particular data structures.

Example of Recursion problem


The factorial function is a good example of recursion.
The factorial function is denoted with an exclamation point (!) and is defined as the product of
the integers from 1 to n. Formally, n! can be stated as:
n! = n ⋅ (n−1) ⋅ (n−2) … 3 ⋅ 2 ⋅ 1
Note that if n = 0, then n! = 1. This is important because it will serve as our base case.

Take this example:


5! = 5 ⋅ 4 ⋅ 3 ⋅ 2 ⋅ 1 = 120.
Stating this in a recursive manner is where the concept of the base case comes in; the base case
is a key part of understanding recursion.

Let’s rewrite the above equation of 5! so it looks like this:


5! = 5 ⋅ (4 ⋅ 3 ⋅ 2 ⋅ 1) = 120
Notice that this is the same as:
5! = 5 ⋅ 4! = 120
This means we can rewrite the formal definition of the factorial in terms of recursion, like so:
n! = n ⋅ (n−1)!
Again, if n = 0, then n! = 1: the base case occurs when n = 0, and the recursive case is
defined by the equation above. Whenever you are trying to develop a recursive solution, it is very
important to think about the base case, as your solution will need to return the base case once all
the recursive cases have been worked through.
Let’s see how we can create the factorial function in Python:
def fact(n):
    '''
    Returns the factorial of n (n!).
    Note the use of recursion.
    '''
    # BASE CASE!
    if n == 0:
        return 1
    # Recursion!
    else:
        return n * fact(n - 1)

Let’s see it in action! Fact (5) = 120


Take note of the if statement that checks whether the base case has occurred;
without it, this function would never finish running.
We can visualize the recursion with the following figure:

We can follow this flow chart from the top, reaching the base case, and
then working our way back up.
Recursion is a powerful tool, but it can be a tricky concept to implement.

Conditionals to Start, Continue, and Stop the Recursion


Consider a function with a string or array argument. The starting conditions are often the exact
same conditions that force the recursion to continue.
More importantly, you want to establish a condition where the recursive action stops.
These conditionals, known as base cases, produce an actual value rather than another call
to the function. However, in the case of tail-end recursion, the return value still calls a
function but gets the value of that function right away.
The establishment of base cases is commonly achieved by having a conditional observe
some quality of the argument, such as the length of an array or the magnitude of a number, just
as loops do. An example follows.
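For instance, a recursive character count over a string can use exactly this kind of base-case condition (a small sketch, not from the notes):

# The base case observes a quality of the argument: the empty string
# produces an actual value instead of another call.
def count_chars(s):
    if s == "":                       # base case: stop the recursion
        return 0
    return 1 + count_chars(s[1:])     # recursive case: smaller input

print(count_chars("recursion"))       # prints 9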

Laws of Recursion
All recursive algorithms must obey three important laws:
i. A recursive algorithm must have a base case, which denotes the point where it
should stop.
ii. A recursive algorithm must change its state and move toward the base case, which
enables it to store and accumulate the values that end up becoming the answer.
iii. A recursive algorithm must call itself recursively, with smaller and smaller values.

Types of Recursion
Recursion is mainly of two types:
i. Direct recursion: a function calls itself from within itself.
ii. Indirect recursion: two or more functions call one another mutually.

Direct Recursion
These can be further categorized into four types:
a. Tail Recursion:
If a recursive function calls itself and that recursive call is the last statement in the function,
it is known as tail recursion. After that call the function performs nothing: all processing is
done at the time of calling, and nothing is done at returning time.

Example:
// Code showing Tail Recursion
#include <iostream>
using namespace std;

// Recursive function
void fun(int n)
{
    if (n > 0) {
        cout << n << " ";
        // Last statement in the function
        fun(n - 1);
    }
}

// Driver code
int main()
{
    int x = 3;
    fun(x);
    return 0;
}
Output:
3 2 1
Time Complexity For Tail Recursion: O(n)
Space Complexity For Tail Recursion: O(n)

Let us convert the tail recursion into a loop and compare the two in terms of time and
space complexity, to decide which is more efficient.
// Converting Tail Recursion into Loop
#include <iostream>
using namespace std;

void fun(int y)
{
    while (y > 0) {
        cout << y << " ";
        y--;
    }
}

// Driver code
int main()
{
    int x = 3;
    fun(x);
    return 0;
}
Output
3 2 1
Time Complexity: O(n)
Space Complexity: O(1)
So in the case of the loop the space complexity is O(1); in terms of space complexity it is
better to write the code as a loop, which is more efficient than tail recursion.

b. Head Recursion:
If a recursive function calls itself and that recursive call is the first statement in the
function, it is known as head recursion. There is no statement and no operation before the
call: the function does not have to process or perform any operation at the time of calling,
and all operations are done at returning time.

Example:
// C++ program showing Head Recursion
#include <bits/stdc++.h>
using namespace std;

// Recursive function
void fun(int n)
{
    if (n > 0) {
        // First statement in the function
        fun(n - 1);
        cout << " " << n;
    }
}

// Driver code
int main()
{
    int x = 3;
    fun(x);
    return 0;
}
Output:
1 2 3
Time Complexity For Head Recursion: O(n)
Space Complexity For Head Recursion: O(n)

Let's convert the above code into a loop.


// Converting Head Recursion into Loop
#include <iostream>
using namespace std;

// Iterative version
void fun(int n)
{
    int i = 1;
    while (i <= n) {
        cout << " " << i;
        i++;
    }
}

// Driver code
int main()
{
    int x = 3;
    fun(x);
    return 0;
}
Output:
1 2 3

c. Tree Recursion:
To understand tree recursion, let's first understand linear recursion. If a recursive
function calls itself once, it is known as linear recursion; if a recursive function calls
itself more than once, it is known as tree recursion.

Example: Pseudo Code for linear recursion


fun(n)
{
    // some code
    if (n > 0)
    {
        fun(n - 1);   // calling itself only once
    }
    // some code
}

Program for tree recursion


// C++ program to show Tree Recursion
#include <iostream>
using namespace std;

// Recursive function
void fun(int n)
{
    if (n > 0)
    {
        cout << " " << n;
        // Calling once
        fun(n - 1);
        // Calling twice
        fun(n - 1);
    }
}

// Driver code
int main()
{
    fun(3);
    return 0;
}
Output:
3 2 1 1 2 1 1

Time Complexity For Tree Recursion: O(2^n)


Space Complexity For Tree Recursion: O(n)
d. Nested Recursion:
In this recursion, a recursive function passes a recursive call as its parameter; that
is, "recursion inside recursion".
Example:
// C++ program to show Nested Recursion
#include <iostream>
using namespace std;

int fun(int n)
{
    if (n > 100)
        return n - 10;
    // A recursive function passing a parameter
    // as a recursive call: recursion inside
    // the recursion
    return fun(fun(n + 11));
}

// Driver code
int main()
{
    int r = fun(95);
    cout << " " << r;
    return 0;
}
Output:
91

Indirect Recursion:
In this recursion, two or more functions call one another in a circular manner: for
instance, fun(A) calls fun(B), fun(B) calls fun(C), and fun(C) calls fun(A), making a
cycle. In the example below, two functions, funA and funB, call each other.
Example:
// C++ program to show Indirect Recursion
#include <iostream>
using namespace std;

void funB(int n);

void funA(int n)
{
    if (n > 0) {
        cout << " " << n;
        // funA is calling funB
        funB(n - 1);
    }
}

void funB(int n)
{
    if (n > 1) {
        cout << " " << n;
        // funB is calling funA
        funA(n / 2);
    }
}

// Driver code
int main()
{
    funA(20);
    return 0;
}
Output:
20 19 9 8 4 3 1
Recursion versus Iteration
Recursion and iteration both execute a set of instructions repeatedly. Recursion is
when a statement in a function calls the function itself repeatedly; iteration is when a loop
executes repeatedly until its controlling condition becomes false. The primary difference
between recursion and iteration is that recursion is a process always applied to a function,
while iteration is applied to the set of instructions that we want executed repeatedly.
Features of Recursion
• Recursion uses selection structure.
• Infinite recursion occurs if the recursion step does not reduce the problem in a
manner that converges on some condition (base case) and Infinite recursion can
crash the system.
• Recursion terminates when a base case is recognized.
• Recursion is usually slower than iteration due to the overhead of maintaining the
stack.
• Recursion uses more memory than iteration.
• Recursion makes the code smaller.

Features of Iteration
• Iteration uses repetition structure.
• An infinite loop occurs with iteration if the loop condition test never becomes
false and Infinite looping uses CPU cycles repeatedly.
• An iteration terminates when the loop condition fails.
• An iteration does not use the stack so it's faster than recursion.
• Iteration consumes less memory.
• Iteration makes the code longer.

Some examples of Recursive Algorithms


Reversing an Array
Let us consider the problem of reversing the n elements of an array A, so that the first
element becomes the last, the second element becomes the second to last, and so on.
We can solve this problem using linear recursion, by observing that the reversal of an
array can be achieved by swapping the first and last elements and then recursively
reversing the remaining elements of the array.

Algorithm ReverseArray(A, i, j):
    Input: An array A and nonnegative integer indices i and j
    Output: The reversal of the elements in A starting at index i and ending at j
    if i < j then
        Swap A[i] and A[j]
        ReverseArray(A, i + 1, j - 1)
    return
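A runnable Python transcription of this pseudocode (same index convention):

# Recursively reverse A[i..j] in place by swapping the two ends.
def reverse_array(A, i, j):
    if i < j:
        A[i], A[j] = A[j], A[i]          # swap first and last
        reverse_array(A, i + 1, j - 1)   # recurse on the remaining elements

A = [1, 2, 3, 4, 5]
reverse_array(A, 0, len(A) - 1)
print(A)   # [5, 4, 3, 2, 1]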
Fibonacci Sequence
The Fibonacci sequence is the sequence of numbers 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, .... The first
two numbers of the sequence are both 1, while each succeeding number is the sum of the
two numbers before it. We can define a function F(n) that calculates the nth Fibonacci
number.
First, the base cases are: F(0) = 1 and F(1) = 1.
Now, the recursive case: F(n) = F(n−1) + F(n−2).
Write the recursive function and the call tree for F(5).

Algorithm Fib(n) {
    if (n < 2) return 1
    else return Fib(n-1) + Fib(n-2)
}
The above recursion is called binary recursion, since it makes two recursive calls instead
of one. How many calls are needed to compute the kth Fibonacci number? Let
n_k denote the number of calls performed in the execution of Fib(k). Then:
n_0 = 1
n_1 = 1
n_2 = n_1 + n_0 + 1 = 3 > 2^1
n_3 = n_2 + n_1 + 1 = 5 > 2^2
n_4 = n_3 + n_2 + 1 = 9 > 2^3
n_5 = n_4 + n_3 + 1 = 15 > 2^3
...
n_k > 2^(k/2)
This means that the Fibonacci recursion makes a number of calls that is exponential in
k. In other words, using binary recursion to compute Fibonacci numbers is very
inefficient. Compare this with binary search, which is very efficient at searching
items: why is this binary recursion inefficient? The main problem with the approach
above is that there are multiple overlapping recursive calls.
We can compute F(n) much more efficiently using linear recursion. One way to
accomplish this conversion is to define a recursive function that computes a pair of
consecutive Fibonacci numbers F(n) and F(n-1) using the convention F(-1) = 0.

Algorithm LinearFib(n) {
    Input: A nonnegative integer n
    Output: Pair of Fibonacci numbers (Fn, Fn-1)
    if (n <= 1) then
        return (n, 0)
    else
        (i, j) <-- LinearFib(n-1)
        return (i + j, i)
}
Since each recursive call to LinearFib decreases the argument n by 1, the original call
results in a series of n−1 additional calls. This performance is significantly faster than the
exponential time needed by binary recursion. Therefore, when using binary recursion,
we should first try to fully partition the problem into two disjoint subproblems, or else be
sure that the overlapping recursive calls are really necessary. A runnable version of
LinearFib follows.
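A Python transcription of LinearFib (keeping the pair-returning convention of the pseudocode, under which F(0) = 0 and F(1) = 1):

# Linear recursion: each call makes exactly one recursive call.
def linear_fib(n):
    if n <= 1:
        return (n, 0)
    i, j = linear_fib(n - 1)     # (F(n-1), F(n-2))
    return (i + j, i)            # (F(n), F(n-1))

print(linear_fib(9)[0])          # 34 under this zero-based convention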

Let's use iteration to generate the Fibonacci numbers.


public static int IterationFib(int n) {
    // Note: this version uses the convention F(0) = 0, F(1) = 1.
    if (n < 2) return n;
    int f0 = 0, f1 = 1, f2 = 1;
    for (int i = 2; i < n; i++) {
        f0 = f1;
        f1 = f2;
        f2 = f0 + f1;
    }
    return f2;
}

What's the complexity of this algorithm?

Exercises
i. Try to find the sum of the elements of an array recursively.
ii. Find the maximum element in an array A of n elements using recursion, then
iteration. What are their time complexities?
