Topic 21 – Introduction to Computational Thinking

Why do I need to learn about computational thinking?

Computational thinking is a fundamental tool for understanding, implementing, and evaluating modern theories in artificial intelligence, machine learning, deep learning, data mining, security, digital image processing, and natural language processing.

What can I do after learning about computational thinking?

You will be able to:

  • use a programming language to express computations,
  • apply systematic problem-solving strategies such as decomposition, pattern recognition, abstraction, and algorithmic thinking to turn an ambiguous problem statement into a computational solution method,
  • apply algorithmic and problem-reduction techniques,
  • use randomness and simulations to address problems that cannot be solved with closed-form solutions (see the sketch after this list),
  • use computational tools, including basic statistical, visualization, and machine learning tools, to model and understand data.
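
As a small illustration of the fourth point, here is a minimal sketch of how randomness can stand in for a missing closed-form solution: a Monte Carlo estimate of π obtained by sampling random points in the unit square. The function name and sample count are illustrative choices, not taken from the books referenced below.

```python
import random

def estimate_pi(num_samples=1_000_000):
    """Estimate pi by sampling points uniformly in the unit square
    and counting how many land inside the quarter circle of radius 1."""
    inside = 0
    for _ in range(num_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # The fraction of points inside the quarter circle approximates pi / 4.
    return 4 * inside / num_samples

if __name__ == "__main__":
    print(estimate_pi())  # typically prints a value close to 3.1416
```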

These skills foster abstract thinking that enables you not only to use technology effectively but also to understand what is possible, recognize inherent trade-offs, and account for computational constraints that shape the software you design.

You will also be prepared to learn how to design and build compilers, operating systems, database management systems, and distributed systems.

That sounds useful! What should I do now?

First, please read this book to learn how to apply computational methods such as simulation, randomized algorithms, and statistical analysis to solve problems such as modeling disease spread, simulating physical systems, analyzing biological data, optimizing transportation, and designing communication networks: John V. Guttag (2021). Introduction to Computation and Programming using Python. 3rd Edition. The MIT Press.
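
To give a flavor of the simulations covered there, the sketch below is a minimal random walk on a two-dimensional grid, written from scratch; the function name, step count, and number of trials are illustrative assumptions rather than code from the book.

```python
import random

def random_walk(num_steps=1000):
    """Simulate a symmetric random walk on a 2D grid and
    return the final distance from the origin."""
    x = y = 0
    for _ in range(num_steps):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x += dx
        y += dy
    return (x * x + y * y) ** 0.5

if __name__ == "__main__":
    # Average over many trials; the mean distance grows roughly
    # like the square root of the number of steps.
    trials = [random_walk(1000) for _ in range(200)]
    print(sum(trials) / len(trials))
```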

Alternatively, if you want to learn the same concepts through interactive explanations, please audit the following courses:

After that, please read chapters 5 and 6 of the following book to learn about the theory of computing and how a machine performs computations: Robert Sedgewick and Kevin Wayne (2016). Computer Science – An Interdisciplinary Approach. Addison-Wesley Professional.
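
If you would like a concrete picture of the “abstract machine” idea before reading those chapters, the following is a minimal Turing machine simulator sketched in Python; this particular transition table increments a binary number by one. The state names and table encoding are assumptions chosen for brevity, not the notation used in the book.

```python
# A tiny Turing machine: a finite control plus an unbounded tape.
BLANK = "_"
RULES = {
    # (state, symbol): (next_state, symbol_to_write, head_move)
    ("right", "0"): ("right", "0", +1),   # scan to the right end of the input
    ("right", "1"): ("right", "1", +1),
    ("right", BLANK): ("carry", BLANK, -1),
    ("carry", "1"): ("carry", "0", -1),   # propagate the carry leftward
    ("carry", "0"): ("done", "1", 0),     # absorb the carry
    ("carry", BLANK): ("done", "1", 0),   # the number was all 1s
}

def run(tape_string, state="right"):
    tape = dict(enumerate(tape_string))   # sparse, unbounded tape
    head = 0
    while state != "done":
        symbol = tape.get(head, BLANK)
        state, write, move = RULES[(state, symbol)]
        tape[head] = write
        head += move
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, BLANK) for i in cells).strip(BLANK)

print(run("1011"))  # -> "1100" (11 + 1 = 12 in binary)
```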

Alternatively, if you want to learn the same concepts through interactive explanations, please audit the following course: Computer Science: Algorithms, Theory, and Machines.

After that, please read the following book to learn what is going on “under the hood” of a computer system: Randal E. Bryant and David R. O’Hallaron (2015). Computer Systems: A Programmer’s Perspective. Pearson.

After that, please audit this course to learn how to build scalable and high-performance software systems: MIT 6.172 Performance Engineering of Software Systems, Fall 2018 (Lecture Notes).

Terminology Review:

  • Algorithms.
  • Fixed Program Computer, Stored Program Computer.
  • Computer Architecture.
  • Hardware or Computer Architecture Primitives, Programming Language Primitives, Theoretical or Computability Primitives.
  • Mathematical Abstraction of a Computing Machine (Turing Machine, Abstract Device), Turing’s Primitives.
  • Programming Languages.
  • Expressions, Syntax, Static Semantics, Semantics, Variables, Bindings.
  • Programming vs. Math.
  • Programs.
  • Big O notation.
  • Optimization Models: Knapsack Problem.
  • Graph-Theoretic Models: Shortest Path Problems.
  • Simulation Models: Monte Carlo Simulation, Random Walk.
  • Statistical Models.
  • K-means Clustering.
  • k-Nearest Neighbors Algorithm.
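
As one concrete example from this list, the following is a minimal sketch of the k-nearest neighbors idea, written in plain Python; the toy data set, function name, and choice of k are illustrative assumptions.

```python
from collections import Counter

def knn_classify(query, examples, k=3):
    """Classify `query` by majority vote among the k training
    examples closest to it (squared Euclidean distance)."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(examples, key=lambda ex: sq_dist(query, ex[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy data: (feature vector, label)
training = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
            ((4.0, 4.2), "B"), ((3.8, 4.0), "B"), ((0.9, 1.1), "A")]
print(knn_classify((1.1, 0.9), training))  # -> "A"
```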

After finishing this topic, please click on Topic 22 – Introduction to Machine Learning to continue.

 
