What is Combinatorial Game Theory?

Combinatorial game theory is a branch of mathematics and theoretical computer science that studies sequential games. The field has mainly been confined to two-player games in which the players take turns changing a position. Traditionally, combinatorial game theory has not studied games of chance or games that use incomplete or imperfect information, favoring games of perfect information in which the set of available moves and the state of the game are always known to both players. As mathematical techniques advance, however, the range of games that can be analyzed mathematically expands, so the boundaries of the field are constantly changing.
Examples of combinatorial games are Go, Checkers, Chess, and Tic-tac-toe. The first three are categorized as non-trivial, while the last is categorized as trivial. In combinatorial game theory, the moves in these games are represented as a game tree. Some one-player combinatorial puzzles (such as Sudoku) and no-player automata (such as Conway's Game of Life) are also categorized as combinatorial games.
In general, game theory includes games of imperfect knowledge, games of chance, and games in which players move simultaneously, tending to represent actual decision-making situations.
An important notion in combinatorial game theory is that of a solved game. Tic-tac-toe, for example, is considered a solved game because it can be established that if both players play optimally, the game will end in a draw.
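The "solved game" claim above can actually be checked by brute force: a minimax search over the full tic-tac-toe game tree returns the value of the game under optimal play. The function names and board encoding below are my own illustrative choices, not from any particular library.

```python
def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of the position for X: +1 if X wins, -1 if O wins, 0 for a draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # full board, no winner: draw
    values = []
    for i in moves:
        board[i] = player
        values.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[i] = ' '
    # X maximizes the value, O minimizes it
    return max(values) if player == 'X' else min(values)

# With optimal play from the empty board, the game value is 0 (a draw).
print(minimax([' '] * 9, 'X'))
```

Exhaustive search is feasible here only because the game tree is tiny; for non-trivial games such as Chess or Go, the same idea needs pruning, heuristics, or stronger theory.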
References
https://www.ics.uci.edu/~eppstein/cgt/
http://math.uchicago.edu/~ac/cgt.pdf
https://www.math.kth.se/matstat/gru/sf2972/2015/gametheory.pdf

All You Should Know About Word Processors

A word processor is a computer or electronic-device software application that composes, formats, edits, and prints documents. In the 1960s, the word processor was a stand-alone machine that combined the keyboard text entry and printing tasks of a typewriter with a simple dedicated computer processor for text editing. Although designs and features varied among models and manufacturers, and some features were added later, early word processors typically featured a monochrome display. Later models introduced spell-checking programs and enhanced formatting options.
As the more versatile combination of personal computers and printers became common, and software applications for word processing grew popular, nearly all businesses stopped making dedicated word processor machines; only two companies in the U.S. were still manufacturing them as of 2009. For the last seven years, Sentinel has been selling a machine marketed as a "word processor," but that machine is actually a highly specialised microcomputer used for publishing and accounting.
In office productivity, word processing was among the earliest applications for the personal computer. It was also the most widely used application on personal computers until the mid-1990s, when the World Wide Web rose to prominence.
Today, most modern word processors use a graphical user interface, providing some form of "WYSIWYG" (what-you-see-is-what-you-get) editing. Almost all of them are powerful systems consisting of one or more programs that produce a combination of text, graphics, and images. Basic features of modern word processors include spell checking, grammar checking, automatic text correction, a built-in thesaurus, Web integration, pre-formatted publication projects, HTML conversion, and much more.
References
http://www.webopedia.com/TERM/W/word_processing.html
https://www.britannica.com/technology/word-processor
https://www.computerhope.com/jargon/w/wordssor.htm

Calculus and Analysis in Mathematics

Calculus is the study of change. It is commonly divided into two main branches: differential calculus and integral calculus. The two branches are related by the fact that integration and differentiation are inverse operations.
Mathematical analysis is the branch of pure mathematics that covers not only integral and differential calculus but also measure, infinite series, analytic functions, and limits. If you begin to study calculus, your success depends on your current knowledge and previous experience of both geometry and algebra.
When studying calculus and analysis, students' ideas and knowledge about functions and their ability to work with algebraic expressions are important, as are their ideas of similarity, ratio, gradient, right-angled triangles, measure, and circle geometry. Students should not only be able to interpret graphs of functions but also know about trigonometric functions, rational functions, and the relationship between logarithms and powers.
Students who are taught calculus starting with the epsilon-delta definition of limits tend to encounter difficulties, which has led to the development of other teaching approaches. One approach to differentiation, referred to as 'locally straight,' is based on the idea of magnifying part of the graph of a function until it approximates a straight line whose slope can be measured. The 'accumulation' idea (a quantity described by its rate of change) is recommended when teaching integral calculus. Both approaches exploit computer environments to tackle multiple representations (symbolic, numeric, and graphical) of mathematical functions.
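Both teaching ideas above are easy to demonstrate numerically. The sketch below (the function names `slope` and `accumulate` are my own, illustrative choices) measures the slope of the "almost straight" magnified graph of f(x) = x² and recovers its integral by accumulating small contributions:

```python
def slope(f, x, h=1e-6):
    # "Locally straight": over a tiny interval the graph looks like a line,
    # so we can measure its slope directly from two nearby points.
    return (f(x + h) - f(x)) / h

def accumulate(f, a, b, n=100_000):
    # "Accumulation": add up the value of f at many points times a small
    # step, i.e. a midpoint Riemann sum approximating the integral.
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: x ** 2
print(slope(f, 3))          # close to the exact derivative 2*3 = 6
print(accumulate(f, 0, 1))  # close to the exact integral 1/3
```

This is exactly the kind of multiple-representation exploration (symbolic formula, numeric estimate, graphical intuition) that the approaches above advocate.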
References
http://www.nuffieldfoundation.org/key-ideas-teaching-mathematics/calculus-and-analysis
http://www.math.harvard.edu/~shlomo/docs/Advanced_Calculus.pdf
http://mathworld.wolfram.com/Analysis.html

Misconceptions Surrounding the Art of Programming

There are many misconceptions surrounding the art of programming. Many people view programming as a job for the gifted; others view it as a career path only for the mathematically inclined or for geeks. Today I will explore three misconceptions about computer programming.
An individual cannot learn programming languages before first mastering mathematics
Many people do not understand the relationship between programming and mathematics. Programmers spend most of their time writing code, not mathematical formulas. An individual's knowledge of mathematics is therefore not directly proportional to their programming skill. Don't get me wrong, though: you still need basic algebra.
You must be a genius
It does not matter whether an individual's IQ is 90 or 160; programming depends on your interests, not on biological factors. Any person who knows how to communicate can be a programmer. At its core, computer programming is just a language with its own vocabulary and grammar, and it exists to help you communicate with machines.
You have to be a graduate to learn computer programming
These days, a person can learn how to program from enthusiastic programmers, thanks to the Internet. You can learn how to program without the help of university lectures. You only need to pick a beginner course on websites such as Codecademy, or visit tutorial sites such as Nettuts+.
References
http://www.hongkiat.com/blog/programming-myth/
https://www.webhostingplanguide.com/5-common-misconceptions-learning-programming/

Reasons for Studying Algebra

Some students do not like studying algebra. Hopefully, at least some of the reasons I discuss below will help them see that studying algebra is useful.

Algebra will be important in your career
Students can't get good grades in mathematics without some knowledge of algebra, and good maths grades are a requirement for entry to university, college, and some apprenticeships. Indirectly, therefore, algebra gives students a better chance of choosing careers they enjoy.

Algebra enables people to think logically
Studying algebra helps the human mind to think logically. Although you will eventually stop studying algebra every day, your brain will have become accustomed to thinking in a logical way. Thinking logically helps people not only in the workplace but also in their daily lives.

Modern technology relies on algebra
All modern technology depends on algebra and mathematics: Google, mobile phones, the Internet, digital televisions, and satellites would not exist without it. When you play a computer game or use a phone, you are relying on other people who studied algebra. If you like algebra, you are on your way to a job in the fast-expanding technology sector.

References
http://www.mathscareers.org.uk/article/10-reasons-for-studying-algebra/
https://demmelearning.com/learning-blog/3-reasons-why-we-learn-algebra/

Why You Need to Think Like a Mathematician

If you want to be a mathematician, you should think like one. The persistent habits of thinking like a mathematician change the way people analyze things. Regardless of your mathematical skill level, thinking like a mathematician will help you to:
Prioritize reason over passion
A mathematical proof depends on a clear, bulletproof set of steps that lead an individual from what is known to what is unfamiliar. In fields like economics, people fight about conclusions and scientists reverse findings; by contrast, mathematicians rarely reverse a result. This fact-based, dispassionate reasoning helps in politics and business, but individuals have to start by accepting that they may reach a conclusion they don't like.
Know that reasoning depends on assumptions
While scientists seek the truth, mathematicians seek truth relative to preliminary assumptions. They know that a triangle's angles add up to 180 degrees only on the assumption that it lies on a flat plane. When people reason about the world, they should question their starting assumptions.
Value ideas and intuition
People think mathematicians focus on logic. They don't. They have big ideas that inspire what they research. There is no contradiction between locked-down reasoning and powerful ideas: individuals need the ideas to motivate them, and the reasoning to show they are right.
References
https://withoutbullshit.com/blog/benefits-thinking-like-mathematician
http://www.kevinhouston.net/pdf/10ways.pdf

Tips on How to Learn Programming Faster

Learning to program is not something an individual can do in a few hours, but it does not have to be your life's work either. There are many things you can do to make learning to program faster. The following tips will help you get the most out of learning how to code.
Before moving on, get it right
Don't rush through any part of the course; ensure you have a strong grasp of the fundamentals. At the same time, make sure you are making progress: you can go too fast as well as too slow. Don't skip a topic just because you have mastered part of it; to cement your grasp of the basics, you need to face its more challenging ideas.
Look at the example code
If you're learning how to program for the first time, make sure you look at, and attempt to understand, every example. Read the code examples before the surrounding text, and try to work out what the programmer did.
Run the code after reading it
When reading a programming book (or tutorial), it is easy to look at example code and say, "I understand it." Of course, it is possible you do understand it, but you can't be sure. To find out whether you are really learning, do something with that code.
References
http://www.cprogramming.com/how_to_learn_to_program.html
http://www.codingdojo.com/blog/7-tips-learn-programming-faster/

Mathematics of Patterns

Patterns are consistent, recurring sequences and can be found in sets of numbers, events, shapes, nature, and almost anywhere else you care to look. Examples of patterns include the seeds in a sunflower, geometric designs on quilts, and the number sequence 0; 3; 6; 9; 12; ….
In a number pattern, the following notation is used:
The 1st term in a sequence is T1.
The 5th term in a sequence is T5.
The 9th term in a sequence is T9.
The general term, the nth term, is written as Tn. If a sequence follows a pattern, you can calculate any term by using the general formula. Therefore, if you can find the relationship between a term's position and its value, you can describe the pattern and find any term in the sequence.
Some sequences have a constant difference between successive terms, referred to as a common difference and often denoted by d. For example, in the sequence 10; 7; 4; 1; …, the common difference is -3. To find the common difference, subtract a term from the one that follows it (d = T2 - T1).
Note the difference between Tn and n: n is a placeholder that indicates the position of a term in the sequence, while Tn is the value of the term at position n.
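For a sequence with a common difference d, the general formula is Tn = T1 + (n - 1)d. The short sketch below puts that formula into code; the function names are my own, illustrative choices.

```python
def common_difference(seq):
    # d = T2 - T1 (assumes the sequence really has a constant difference)
    return seq[1] - seq[0]

def nth_term(t1, d, n):
    # General term of an arithmetic sequence: Tn = T1 + (n - 1) * d
    return t1 + (n - 1) * d

seq = [10, 7, 4, 1]
d = common_difference(seq)
print(d)                        # -3 for this sequence
print(nth_term(seq[0], d, 5))   # the 5th term: 10 + 4 * (-3) = -2
```

Knowing the relationship between n and Tn is exactly what lets you jump straight to, say, the 100th term without writing out the first 99.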
References
https://www.siyavula.com/maths/grade-10/04-number-patterns/04-number-patterns-01.cnxmlplus
https://www.learner.org/teacherslab/math/patterns/number.html

Three Programming Languages That are Difficult to Learn

While many people report on the world's easiest programming languages to learn, there is another class of languages that can drive you nuts. Most of us started to program by writing code in languages such as C, C++, or Java. Our seniors used languages such as COBOL, Fortran, and Pascal, which are considered a little more difficult. Today, I will discuss three programming languages that can push your brain to the limit.
Brainfuck
As the name suggests, this language is very difficult. It was invented in 1993 by Urban Müller in an attempt to create a programming language with the smallest possible compiler for the Amiga OS, version 2.0. It operates on an array of memory cells, each initially set to zero, and has only eight commands.
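The eight commands are `>` `<` (move the cell pointer), `+` `-` (increment/decrement the current cell), `.` `,` (output/input a byte), and `[` `]` (loop while the current cell is non-zero). A minimal interpreter sketch in Python (the name `run_bf` and the 30,000-cell tape size are my own choices, though 30,000 cells is the conventional default):

```python
def run_bf(code, input_bytes=b""):
    # Precompute matching bracket positions for [ and ].
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape = [0] * 30000      # the array of memory cells, initially zero
    ptr = pc = inp = 0
    out = bytearray()
    while pc < len(code):
        c = code[pc]
        if c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.':
            out.append(tape[ptr])
        elif c == ',':
            tape[ptr] = input_bytes[inp] if inp < len(input_bytes) else 0
            inp += 1
        elif c == '[' and tape[ptr] == 0:
            pc = jumps[pc]  # skip the loop body
        elif c == ']' and tape[ptr] != 0:
            pc = jumps[pc]  # repeat the loop body
        pc += 1
    return bytes(out)

# 65 increments then output: prints the byte for 'A'.
print(run_bf('+' * 65 + '.'))
```

Even "add two numbers" becomes a puzzle when these eight commands are all you have, which is exactly why the language earns its name.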
COW
This programming language was made with the bovine in mind. Since a cow has limited vocabulary skills, it is natural that the language includes only the words it knows. Therefore, all instructions are just variations of "moo," the only word a cow seems to understand. Any other symbol or word that isn't an instruction is ignored entirely.
Whitespace
Whitespace was released on 1 April 2003, and people believed it was an April Fools' joke. In this language, only tabs, linefeeds, and spaces have meaning; the interpreter ignores all non-whitespace characters.
References
https://www.techworm.net/2016/05/worlds-five-difficult-progamming-languages-learn.html

Coding Theory

Coding theory is the study of the properties of codes and their fitness for particular applications. Codes are mainly used for cryptography, data compression, networking, and error correction. They are studied by many scientific disciplines, such as computer science, mathematics, and electrical engineering, for the purpose of creating reliable and efficient data-transmission methods. Typically, this involves removing redundancy and detecting and correcting errors in the transmitted data.
There are four types of coding: source coding (data compression), channel coding (error correction), cryptographic coding, and line coding. Source coding compresses data so it can be transmitted efficiently. For instance, data files are zipped to reduce Internet traffic.
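Source coding is easy to see in action with Python's built-in `zlib` module, a DEFLATE implementation (the same family of algorithms behind zip files). The sample data here is an arbitrary choice; any redundant input compresses well.

```python
import zlib

data = b"abc" * 1000                 # highly redundant data, 3000 bytes
packed = zlib.compress(data)
print(len(data), len(packed))        # the compressed form is far smaller
assert zlib.decompress(packed) == data  # source coding here is lossless
```

The more redundancy the source contains, the more a source coder can remove; truly random data, by contrast, barely compresses at all.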
Error correction adds extra data bits to make transmission more robust to disturbances on the transmission channel. Many users may not be aware of the numerous applications that use error correction. A music CD uses a Reed-Solomon code to correct for dust and scratches; in this application, the transmission channel is the CD itself. Cell phones also employ coding techniques to correct for errors in high-frequency radio transmission. Telephone transmission, NASA missions, and data modems all use channel coding techniques to get the bits through.
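Channel coding can be shown in miniature with a triple-repetition code, the simplest error-correcting code: send each bit three times and decode by majority vote. Real systems use far stronger codes such as Reed-Solomon; this sketch (with function names of my own choosing) only illustrates the redundancy-plus-voting idea.

```python
def encode(bits):
    # Channel coding adds redundancy: transmit each bit three times.
    return [b for b in bits for _ in range(3)]

def decode(coded):
    # Majority vote within each triple corrects any single flipped bit.
    out = []
    for i in range(0, len(coded), 3):
        triple = coded[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)
    return out

msg = [1, 0, 1, 1]
sent = encode(msg)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
sent[4] = 1                 # the noisy channel flips one transmitted bit
print(decode(sent) == msg)  # the single flip is corrected
```

The price of this robustness is rate: the repetition code triples the number of transmitted bits, which is why practical systems prefer codes that correct more errors per redundant bit.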
References
http://mathworld.wolfram.com/CodingTheory.html
https://www.tcs.ifi.lmu.de/teaching/ws-2016-17/code