**Computer Science** – familiar to some, an unknown to many. Ever wondered what’s going on behind your computer screen, beyond the lines of code? We hosted an Intro to Computer Science workshop at BrainStation Toronto with our Product Architect – here are some key takeaways.

**What is Computer Science?**

• Computer science is the study of how to compute things. It isn’t the study of computers; it’s the study of how to design the tasks that computers undertake: computation.

• Computer science is an incredibly broad field, spanning a myriad of subjects.

• Topics in or related to computer science: computation theory, information theory, algorithms, data structures, language theory, formal methods, artificial intelligence (AI), numerical analysis, computer vision, networks, databases, and software engineering.

**Why Should I Care?**

• Computers are ubiquitous

• CS lends itself to developing problem-solving skills

• Programmers are empowered

• Uncharted Creative Territory

**Computability**

• Computers cannot do everything.

• There are more problems that computers cannot solve than problems they can: the space of computable problems is smaller than the space of uncomputable ones.

• Computers’ powers are not infinite.

The halting problem (constructing a program that determines whether any given program will halt) is a famous uncomputable problem, proved unsolvable by Alan Turing using a diagonalization argument.

**Complexity**

• Computable problems break down into further subcategories based on computational difficulty.

• All problems fall into one of two broad types: decision problems (yes/no answers) or search problems (finding a solution).

• Complexity can be measured along several dimensions, space and time being the most common. There are often trade-offs when optimizing algorithms for either space or time.

• The hardest problems in a complexity class are mutually reducible to one another. If one were to find a more efficient way of computing one such problem, it would serve as a more efficient way of computing every problem in that class.

**The classes:**

P – polynomial time

EXP – exponential time

NP – nondeterministic polynomial time

NEXP – nondeterministic exponential time

The great question is whether P = NP, formulated by Professor Stephen Cook of the University of Toronto. Its significance: if every problem in NP were reducible to one solvable in polynomial time, a vast amount of the world’s computation would become dramatically faster. The question was prompted in part by the discovery that nondeterministic finite automata (NFAs) are equivalent in power to deterministic finite automata (DFAs).

**Polynomials – Math Stuff**

• Computational complexity can be expressed in mathematical terms: program runtimes can be reduced to mathematical expressions such as polynomials, exponential functions, etc.

• Most applied software engineering falls within the polynomial range. Polynomials are simple functions: constant coefficients multiplying different powers of some variable.

**Time**

• In CS the runtime of a program is measured in steps, for example the number of lines executed.

• That isn’t the whole story, however: code loops. A loop multiplies the number of lines run inside it by the number of times it runs.

• …This is intuitive when you think about it.

However, the number of steps a program takes often changes based on how much information is passed into it. This is why we measure the time complexity of a program relative to its input: for example, if the input has length/size N, the program might take 10N + 10 steps.
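A minimal Python sketch of this idea (the per-line step counts in the comments are illustrative, not exact):

```python
def sum_list(numbers):
    """Sum a list by visiting each element once."""
    total = 0            # 1 step
    for x in numbers:    # the loop body runs N times
        total += x       # 1 step per iteration
    return total         # 1 step

# Roughly N + 2 steps: the step count grows linearly with the input size N.
print(sum_list([1, 2, 3, 4]))  # → 10
```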

**Polynomial Time**

• So, by the points above, we understand how to describe a program’s runtime as a polynomial.

• However, in CS we are only concerned with the highest-order term in a program’s runtime (highest order = the biggest power of N).

• For large inputs, the lower-order terms barely factor into the runtime at all, and neither do the coefficients attached to them.

• Thus, if you are analyzing the Big O runtime of a program (Big O being the all-important worst-case runtime) whose complexity is 6N^3 + N + 1, the Big O of that code is O(n^3).

• For certain small sizes of N (n = 1, 2, 3, 4, 5, etc.), whatever makes sense for the specific problem, an algorithm with a larger Big O complexity may actually have a smaller runtime.

• This is why, in formal proofs, we pick a particular n beyond which the greater Big O will always be bigger than the other.

• When ranking programs, the one with the smallest Big O is generally the best fit for the given problem.
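To see that crossover point concretely, here is a toy Python sketch with made-up step counts (the constant 100 and both functions are illustrative assumptions, not from the workshop):

```python
# Hypothetical step counts: a "slow" linear algorithm vs a "fast" quadratic one.
def linear_steps(n):
    return 100 * n   # O(n), but with a large constant factor

def quadratic_steps(n):
    return n * n     # O(n^2), with a small constant factor

# For small inputs the quadratic algorithm takes fewer steps...
print(quadratic_steps(10) < linear_steps(10))      # → True
# ...but past the crossover point (n = 100) the linear one always wins.
print(linear_steps(1000) < quadratic_steps(1000))  # → True
```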

**Common runtime classes:**

O(log n) – logarithmic

O(n) – linear

O(n^2) – quadratic

O(2^n) – exponential
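A quick Python sketch (illustrative numbers only) makes the difference between these growth rates tangible:

```python
import math

# How each runtime class grows as the input size n increases.
for n in (8, 16, 32):
    print(f"n={n:>2}: log={math.log2(n):.0f}  linear={n}  "
          f"quadratic={n**2}  exponential={2**n}")
```

Doubling n barely moves the logarithm, quadruples the quadratic term, and squares the exponential one.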

**Searching**

• Imagine a list of elements 1, 2, 3, … N.

• What are the different ways to search through it for an element?

• O(n): search through the list index by index. This works, but it is inefficient for large lists.

• O(log n): sort the list into a binary search tree. Each node of the tree has up to 2 children: children to the left of a node are smaller than the node’s element, and children to the right are larger. Now search through the tree for the desired element.

• If the node you are currently looking at is smaller than the number being searched for, then you know the result is in the right subtree. Half of the remaining elements have just been eliminated.

• Why is this log n? Every time you fail to find the number, half of the search space is removed, so the number of steps needed grows only logarithmically with the size of the list.
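The same halving idea can be sketched with a classic binary search over a sorted Python list (an illustrative sketch, using a sorted list in place of an explicit tree):

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if absent.
    Each miss halves the remaining search space: O(log n)."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1   # target can only be in the right half
        else:
            hi = mid - 1   # target can only be in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # → 3
```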

**Duplicates:** Finding duplicate content within a set (assume unsorted set)

• O(n^2): the naive duplicate algorithm.

• For each element, compare it against every other element in order.

• This is very inefficient: it compares each list item to every other list item, iteratively.

• O(n): Implementation using Hashing.

• For each element in the set, place it on a special shelf for that value.

• If there is already a value on that shelf, then that element is a duplicate.

• This uses a hash map: a data structure that stores elements so they can be accessed in constant time. The location an element ends up in depends on its value and the hash function (the algorithm that translates a value to a location in the hash map).

• The solution sacrifices space for time.
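The shelf idea can be sketched in Python, using a built-in `set` as the hash-backed shelves (an illustrative sketch, not the workshop’s exact implementation):

```python
def has_duplicate(items):
    """O(n) duplicate check: the set plays the role of the 'shelves'.
    Each membership test and insert is, on average, constant time."""
    seen = set()
    for item in items:
        if item in seen:   # shelf already occupied → duplicate found
            return True
        seen.add(item)     # place the item on its shelf
    return False

print(has_duplicate([4, 1, 7, 1]))  # → True
print(has_duplicate([4, 1, 7]))     # → False
```

The extra `set` is exactly the space-for-time trade mentioned above: O(n) memory buys O(n) time.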

**Fibonacci: 1, 1, 2, 3, 5, 8, 13, 21**

• A very simple sequence that reappears constantly throughout nature.

• The sequence starts with seed values of 1, 1 for the first two indices, and each subsequent term is the sum of the previous two.

• O(2^n): compute it recursively. Start from the desired index and recurse back down towards the two given seed values. This problem reduces naturally to a recursive solution.

• Recursive meaning a program that calls itself. As such, each time the program needs a value of fib that is not one of the seed values, it calls itself again (twice).

• If drawn as a tree, the 2^n call structure is easily visible (left as an exercise).
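The naive recursive version described above might be sketched in Python like this (an illustrative sketch; the doubling call tree is what makes it exponential):

```python
def fib(n):
    """Naive recursive Fibonacci: each call spawns two more calls,
    so the call tree has on the order of 2^n nodes."""
    if n <= 2:
        return 1                       # seed values: fib(1) = fib(2) = 1
    return fib(n - 1) + fib(n - 2)     # the two recursive branches

print([fib(i) for i in range(1, 9)])   # → [1, 1, 2, 3, 5, 8, 13, 21]
```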