Posts Tagged ‘Math’

Really Big Numbers

July 14, 2016

I am going to talk about some cool math tonight.  Basically I am just going to be explaining some ways to talk about big numbers that most people have never heard of.  It might be kinda short because I need to get to sleep fairly soon, but I hope it will be interesting.

To start, let's establish a pattern.  One of the simplest things you can do with a number is to count up by one, to go to the next whole number.  It's how we first learn to add numbers: by counting up a number of times equal to the amount we are adding.  If we want to add 4 to 3, we count 3, 4, 5, 6, 7.  When you add a number, it means you are counting up that many times.  So we can use counting in order to understand how to add.

Next we can use adding to learn to multiply.  If we want to multiply one number by another, we just add the first number a number of times equal to the second.  So if we multiply 3 by 4, we are saying 3+3+3+3.  We are adding 3, 4 times.  So we are using addition in order to understand how to multiply.

After multiplication come exponents, or powers.  Taking a number to the power of another number just means multiplying by that number a number of times equal to the second number.  So if we take 3 to the fourth power, we are just multiplying 3*3*3*3.  We are multiplying by 3, 4 times.  So we are using multiplication in order to understand how to take powers.
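The first three rungs of this ladder can be sketched in a few lines of code, with each operation defined purely in terms of the one below it (a Python sketch for illustration; this is not how computers actually add, just the pattern made concrete):

```python
def add(a, b):
    """Add b to a by counting up b times."""
    for _ in range(b):
        a += 1
    return a

def multiply(a, b):
    """Multiply a by b by adding a, b times."""
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total

def power(a, b):
    """Raise a to the power b by multiplying by a, b times."""
    total = 1
    for _ in range(b):
        total = multiply(total, a)
    return total
```

So add(3, 4) counts its way to 7, multiply(3, 4) is 3 added four times, giving 12, and power(3, 4) is 3 multiplied in four times, giving 81.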

Now most people know how to do all of these things, even if they might be a bit hazy on powers, and if they stop to think about it, it is pretty obvious that each is built on top of the one before.  When they are all presented in an ordered fashion like this, however, it becomes easy to see that the pattern can continue.  Let me show you the next step, then.

After powers comes an operation called tetration.  It is so called because it is the 4th level of operation, if you start with addition as the basic operation instead of counting.  To tetrate a number by another number, you build a power tower: the first number raised to the power of itself, stacked a number of times equal to the second number.  So if you want to tetrate 3 by 4, you would have 3^3^3^3, where ^ is the symbol commonly used on computers for powers, due to the difficulty of typing the normal superscript notation.  We are going to the power of 3, 4 times here.  Thus we are using powers in order to understand tetration.

A couple of interesting things to note about tetration.  First, the notation.  It usually looks like taking a power, except the small number sits in front of the big number instead of after it.  So the tetration of 3 by 4 discussed above would look like a tiny 4 hanging in the air, followed by a normal sized 3.  If you want to write this out on a computer, you use two ^s, so 3 tetrated by 4 would be 3^^4.  Another important thing to understand is that when you work out the powers to find the answer, you start at the highest point of the tower and work down, instead of working up.  As an example, 3 tetrated by 3 starts out as 3^3^3, which then becomes 3^27, because we solve the outermost operation first: 3 cubed equals 27, so we replace the 3^3 at the end with 27.  Then we take 3 to the 27th power, which works out to be 7,625,597,484,987.  If we worked in the other direction instead, and turned 3^3^3 into 27^3, we would only get 19,683, which is a fair bit smaller.  So it's important to go from the top down if you are using normal notation, or from right to left if you are using the computer notation.  The last notable thing is how fast tetration grows.  Let's look at some examples.

2 tetrated by 2 = 4

2 tetrated by 3 = 16

2 tetrated by 4 = 65,536

2 tetrated by 5 = 2.00353 × 10^19,728.  This is a number with almost 20,000 digits.  If I wrote it out longhand, it would be longer than the rest of this article put together by a significant amount.

3 tetrated by 2 = 27

3 tetrated by 3 = 7,625,597,484,987

3 tetrated by 4 = A number with about 3.6 trillion digits.

3 tetrated by 5 = A number whose number of digits is not even expressible in standard notation.

4 tetrated by 2 = 256

4 tetrated by 3 = 1.34078 × 10^154, a number with 155 digits.

4 tetrated by 4 = A number whose number of digits is itself a number with 154 digits.

4 tetrated by 5 = A number whose number of digits has a number of digits not expressible in standard notation.
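The smaller entries in the table above can all be checked with a short function that evaluates the tower from the top down, i.e. right to left in the ^ notation (a Python sketch; Python's arbitrary-precision integers handle even the 155-digit 4^^3 without trouble):

```python
def tetrate(base, height):
    """base^^height: a power tower of `height` copies of base,
    evaluated from the top of the tower down."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result
```

tetrate(3, 3) gives 7,625,597,484,987, confirming the top-down evaluation order; 3^^4 is where this approach dies, since its final step is 3 raised to a 7,625,597,484,987th power, whose trillions of digits would not fit in memory.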

As you can see, these numbers get pretty absurd, pretty fast.  Still, tetration is obviously not the end of what you can do.  The next step is something called pentation: taking a number and tetrating it a number of times equal to the second number.  This can extend indefinitely, and it does not make sense to keep inventing unique notation for each new operation.  Because of this, Donald Knuth, famous in the programming world for writing The Art of Computer Programming, a series of books on the fundamentals of algorithms, created a notation called Knuth's Up Arrow Notation.  Basically, it involves putting a number of upward-facing arrows between the two numbers, with the number of arrows indicating which operation you are using.  One up arrow is powers, two up arrows is tetration, three is pentation, four is the one after that, and so on.  While the ideal depiction uses a full arrow, online we simply use the ^ symbol for the arrow.  So ^ is powers, ^^ is tetration, ^^^ is pentation and so on.  As you can see, we have already been using Knuth's Up Arrow Notation in the computer depictions.

One problem with Knuth's up arrow notation is that, depending on what you are doing, the number of arrows can make the notation unwieldy.  If you are doing the 10th up arrow operation, you don't want to be writing ^^^^^^^^^^ between your numbers each time.  As such, once you get past five or six arrows, it is conventional to simply write the number of arrows as a superscript next to a single arrow.  Online, you put the number of arrows in parentheses after the initial caret symbol.  So 3^^^^^^4 could alternately be represented as 3 ^(6) 4.  Using this notation, we can now talk about one of the biggest numbers ever actually used for anything, a number called Graham's Number.
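The whole family of operations can be collapsed into one recursive function, where the middle argument is the number of arrows (a Python sketch for illustration; only tiny inputs ever finish running, which is rather the point):

```python
def arrow(a, n, b):
    """Compute a ^(n) b: a, followed by n up arrows, followed by b."""
    if n == 1:
        return a ** b  # one arrow is ordinary powers
    if b == 1:
        return a       # a ^(n) 1 = a, for any number of arrows
    # peel one copy of a off the tower, dropping down to n - 1 arrows
    return arrow(a, n - 1, arrow(a, n, b - 1))
```

arrow(2, 3, 3) is 2^^^3 = 65,536; nudge any of the three arguments up slightly and the computation stops fitting in the universe.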

Graham's Number was used as an upper bound for a mathematical proof in the field of combinatorics, the mathematical study of combinations and permutations of things.  The number is generated in 64 steps.  The first step is to take 3 ^^^^ 3, i.e. 3 combined with 3 using the operation one above pentation.  This is a number so incomprehensibly large that you could not fit it into the universe even if you subdivided the entire thing into Planck volumes, roughly the smallest volume it is physically meaningful to measure.  We call this number G1.  So already we have a number generally beyond comprehension and beyond conventional notation in any sense.  Then to get G2, we take 3 and put G1 up arrows between it and another 3.  So using our notation, it is 3 ^(G1) 3.  This is a number generated by increasing the rate of growth by an already incomprehensible amount.  For the next 62 steps we repeat the procedure, with G3 having G2 up arrows and so on.  At the highest level, G64, we have Graham's Number.  I am not even going to try to explain how big this is.  Each step increases the size by increasingly incomprehensible increments, and there are 64 of these steps.
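Since even G1 cannot actually be computed, the 64-step construction can only ever be written down symbolically.  A small Python sketch that renders each step in the ^(n) notation from above:

```python
def graham_step(k):
    """Describe step k of the 64-step construction symbolically.
    G1 = 3 ^(4) 3, and each later Gk uses G(k-1) arrows."""
    if k == 1:
        return "3 ^(4) 3"
    return f"3 ^(G{k - 1}) 3"
```

graham_step(64) is Graham's Number itself: 3 ^(G63) 3, a pair of threes separated by an unimaginable number of arrows.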

So with that explanation of Graham's Number, that's where I am going to stop today.  A few things to note before I sign off, though.  First, the quantity that Graham's Number is an upper bound for has a lower bound of 13.  So that number is somewhere between 13 and Graham's Number.  Actually, a smaller upper bound was found later: 2^^^6, a still incredibly large number, not expressible in normal numerical notation, but much, much smaller than Graham's Number.  So the answer to the problem is somewhere between 13 and this new number.  Additionally, while often considered the largest number ever used for anything, it has been supplanted by even larger numbers as time has moved on.  Some of these numbers are so large that they are not even expressible using Knuth's Up Arrow Notation in a reasonably concise manner, and there are other notations for talking about even bigger numbers.  One of the most famous is called Conway's Chained Arrow Notation, which I may at some point write another post about, but for now I am leaving it for another day.  If you are interested, look it up.  Anyways, all this is just a math thing that I think is really fun and interesting.  I might try to do another math related post next week if I can find something equally fun.  So long, and happy imagining of numbers beyond imagination.

For want of a Title

July 7, 2016

I originally wrote that on Thursdays I would be writing about computers or programming, and while those are both things that I care about a fair bit, the kinds of things that I would probably talk about most of the time if I were to talk about programming would be pretty technical, and so I think I shall have to figure out something else to fill the Thursday slot.  I shall endeavor today to talk a bit about those topics in a very general sense that should be comprehensible without formal training, but I think on later Thursdays my topic shall change to something else.

I am afraid that I also don't have some sort of overarching goal for my writing today.  I found myself unable to come up with a title because I didn't really know what I wanted to write about, and I only started writing after a couple hours of distracting myself with other things for the same reason.  I think for myself right now it is more important to keep up the streak of writing every day than to make sure everything I write is good, however; hence this post.  Basically, expect today's post to ramble across a few disjointed topics, and potentially to not be very long, because the power just died at my house and I only have limited battery life to write this with.  With those warnings out of the way, let's jump into the soup!

I suppose the easiest first thing to talk about is the topics in the world of computer science that I am currently interested in.  I have been trying to learn how to make something called a compiler on one hand, and on the other hand I have been endeavoring to understand and use a style of programming called “Functional Programming”.  These have been the general themes of my computer science exploration as of late, though I occasionally leap off into other waters.  I am going to give a basic explanation of each and try to explain why I think they are cool.

First, compilers: a compiler is something that takes a piece of computer code written in some language and turns it into something that the computer can read or understand.  It's kind of like a translator: it starts from a language that is more human readable and able to express more ideas at once, and turns it into a language that a computer can understand, one broken up into very simple ideas repeated many, many times.

As a sort of example, let's pretend we have a computer that understands how to do a few things.  It can read a number, it can increase a number by one, it can check if two numbers are equal, it can save a number, and it can print out a number.  If we wanted to make a computer that only understood those five things learn to add, we would need to go through a bunch of steps.  We would have to read in the first number we wanted to add.  Then we would need to save that number.  Then we would need to read in the second number we wanted to add.  We would need to save that number as well.  Now, to add the two together, we would need to increase the first number by one a number of times equal to the second number.  In order to do that, we would add one to the first number, then add one to another number that started at zero.  Then we would check if the number that started at zero was equal to our second number.  If it is, then we know we successfully added the first and second numbers together.  If it isn't, then that means we still need to keep adding one.  We just keep repeating the three steps: add 1 to the first number, add 1 to the number that started at 0, then check if the number that started at zero equals the second number.  Once we have done all of that, we have successfully added two numbers together.
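The loop just described can be sketched directly, using nothing but increase-by-one and equality checks (a Python sketch for illustration; reading, saving, and printing become ordinary arguments and a return value here):

```python
def machine_add(first, second):
    """Add two numbers using only increment-by-one and equality checks."""
    counter = 0                 # the number that starts at zero
    while True:
        if counter == second:   # check: have we added one enough times?
            return first
        first = first + 1       # add 1 to the first number
        counter = counter + 1   # add 1 to the counter
```

machine_add(3, 4) grinds its way to 7 one increment at a time, exactly as the steps above describe.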

As you can see, the way computers do things can be both a bit hard to follow, because their thinking is different from ours, and incredibly boring.  If you are telling the computer what to do, you don't want to tell it to add 1 five times every time you want it to add five.  So what you can do is make a compiler that understands something closer to the way a human thinks, and have that translate into the way the computer understands it.  In our example, the compiler could take something like "3 + 6" and turn it into the series of many simple instructions mentioned earlier that would add six to three.  Writing a compiler involves a lot of steps, as you have to find ways to convert your new language into the old one, interpret the inputs in the correct way, and sometimes even change the actual instructions into something that will output the same thing, but faster.  In the example above, the second number determines how many times you have to add 1, so you always want the smaller of the two numbers to be the second, and the larger the first.  Doing 3+6 would be 3+1+1+1+1+1+1, but if you flip them around it could be faster, 6+1+1+1, and a well written compiler could figure that out and write the machine code the second way, even though we wrote "3+6" in our language.
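A toy version of that optimization is easy to sketch: a "compiler" that turns an expression like "3 + 6" into a list of simple instructions, putting the smaller operand second so fewer instructions come out (Python for illustration; the instruction names LOAD and INCREMENT are made up for this sketch):

```python
def compile_addition(source):
    """Turn a string like "3 + 6" into a list of simple instructions,
    swapping the operands so the increment loop runs as few times as possible."""
    left, right = (int(part) for part in source.split("+"))
    start, count = (left, right) if left >= right else (right, left)
    return [f"LOAD {start}"] + ["INCREMENT"] * count

def run(instructions):
    """A tiny machine that only understands LOAD and INCREMENT."""
    value = 0
    for instruction in instructions:
        if instruction.startswith("LOAD"):
            value = int(instruction.split()[1])
        else:
            value += 1
    return value
```

compile_addition("3 + 6") emits LOAD 6 followed by just three INCREMENTs instead of six, and run still gets 9, the same answer either way.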

Anyways, learning how to do this, how a computer translates languages, is interesting to me on a couple of levels.  One, I spend a lot of time programming, and I think having an understanding of the process my code goes through in order to be understood by the computer is valuable.  Additionally, being able to write a compiler means that, should the mood strike me, I could invent and implement my own computer language.  There are a great number of silly programming languages out there, written to be intentionally absurd, and making my own brand of silly language, one that looks perhaps like a magic spell or a cooking recipe instead of a piece of code, sounds enjoyable to me.  It is also possible I will someday have a specialized project where making a language specifically for interacting with it would make sense, and knowing how to construct a compiler would let me at least consider that option.  One must never forget the wise words of GI Joe.

Our second topic, then, is the idea of “Functional Programming”.  I am not confident in my ability to explain this particularly well, but I will do my best.  In order to understand what functional programming is, it is first important to understand what a function is in mathematics.  In math, a function is a process that takes some number of numbers and gives one specific output depending on the inputs.  f(x)=x+1 is a function, named f, that takes one input, called x in this case, and outputs that number plus one.  So if we put 3 into our function f we would get 4; if we put 8 in we would get 9.  We would usually represent this with f(3)=4, or f(8)=9.  You can see that you are taking the function f of 3 or the function f of 8 in those examples, and it is telling you what the output is for those numbers.  If we made another function called g, we could define it g(x)=2*x.  In this case, if we put a number into function g, we would get that number multiplied by 2.  So g(3)=6 and g(8)=16.  Functions can take more than one input; for example, you could have h(x,y)=x+y.  In this case, the function called h takes 2 numbers and outputs one number, which is those two added together.  So h(1,2)=3 and h(8,5)=13.  The important thing about a function is that it takes some number of inputs, and for each combination of inputs, it has exactly one output.  If you take f(9), that is going to be 10 every time.  No matter how many times you take f(9), the answer will always be the same.
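The three functions above translate directly into code (Python):

```python
def f(x):
    return x + 1   # f(x) = x + 1

def g(x):
    return 2 * x   # g(x) = 2 * x

def h(x, y):
    return x + y   # h(x, y) = x + y
```

f(3) is 4, g(8) is 16, h(8, 5) is 13, and f(9) is 10 every single time you ask.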

In more advanced math, and in programming, the concept of a function is expanded to more than just numbers, and functions can have better names than f, g, or h.  If I am writing a program, I can make a function that takes a word as an input and outputs a sentence, or takes numbers and outputs shapes.  A simple example would be a function called hello that accepts a name and then greets you.  You could define it as hello(name) = “Hi ” name ” how are you?”.  So if we put in Jack, we get hello(Jack) = “Hi Jack how are you?”.  That’s a very simple example, but things like that are all over the place in programming.  It can be very nice to make a function to do something that you are going to want to do many times.  It also makes it easier to understand what you are writing if you use functions with names that tell you what they are doing.  If I wrote a piece of code that looked like this: makeUpperCase(“hi tim”), you could be pretty sure the output of that function would be “HI TIM”.  If you name your functions well, it can make your programs easy to read and understand, which is very useful when you are working with other people, or even when you come back to your code after a few weeks or months away from it.
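Both of the examples just described might look like this in real code (Python; makeUpperCase is the name from the text, written here using Python's built-in upper method):

```python
def hello(name):
    """Greet someone by name."""
    return "Hi " + name + " how are you?"

def makeUpperCase(text):
    """Return the same text in capital letters."""
    return text.upper()
```

hello("Jack") gives back "Hi Jack how are you?", and makeUpperCase("hi tim") gives "HI TIM", just as the names suggest.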

Alright, so I think I have explained basically what a function is at this point.  Now the problem is explaining what exactly makes something Functional Programming, because programming in any language more complicated than machine code usually involves functions.  Is all programming Functional Programming, then?  Not exactly.  What Functional Programming is depends somewhat on comparing it to the alternative, which is called Imperative Programming.  The way they are usually described as differing is that functional programming involves describing what your program does, while imperative programming involves describing how to do it.  What this means in practice is that certain programming techniques typically used in imperative programs and imperative languages, such as loops, objects, and modifiable variables, are eschewed by functional programs and functional languages, in favor of a greater focus on function use, the ability to have functions create functions, and the ability to use bits and pieces of functions.

There are a lot of reasons why each of the two styles gets used, and each is generally better in different situations.  The main reason there has to be a divide is that functional programming languages use concepts from the mathematics of functions to make some of their advanced features work, or at least to let the language check that you are using them correctly.  The imperative techniques, while powerful and useful in their own context, make functions in programs different enough from functions in math that those mathematical ideas can't be carried over, and some of the powerful features of functional programming can't be used.  I am afraid that a discussion of what exactly these techniques are is a bit beyond my ability to explain effectively, but suffice to say that the idea that a function has exactly one output for each set of inputs is important for the functional techniques to work, and certain imperative techniques make that untrue: you can input the same thing at a different time, and a different answer comes out.  The imperative techniques that cause this, specifically object oriented programming, are very powerful, and so are used in situations where they matter more than the advanced powers of functional programs, and vice versa.
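The "same input, different answer at a different time" problem is easy to demonstrate (a Python sketch; the class-based counter here stands in for the imperative, stateful style):

```python
def pure_add(x, y):
    """A mathematical function: the same inputs always give the same output."""
    return x + y

class Counter:
    """An object with hidden state: calling next() with no arguments
    gives a different answer every time."""
    def __init__(self):
        self.value = 0

    def next(self):
        self.value += 1
        return self.value
```

pure_add(2, 3) is 5 no matter how many times you ask, but counter.next() answers 1, then 2, then 3, even though the call looks identical each time.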

Anyways, the reason I am so interested in this is that for the longest time I was not aware of the difference between these things.  When I was taught in college, I learned imperative programming exclusively, and some of the ideas that are unique to imperative programming were explained as being fundamental to effective programming.  From my understanding, this was because a large number of the types of programs the industry needed coded in the 90s and 00s were done more effectively in an object oriented, imperative style.  Since the two styles really do differ in the way they make you think about solving a problem, it was likely deemed useful to immerse us in the style that would be useful in the industry.  In recent years there has been a rise in the areas where functional programming is more effective, and so it, and those spreading its gospel, have become more common.  I heard about it, decided I wanted to learn about it, and so I began that journey.  Some of the stuff you can do with functional programming is really, really cool, so I am glad I started learning about it, and I hope that if you have any interest in programming, you consider learning a bit about both styles before getting yourself stuck in one or the other.

Anyways, those are the things I have been spending time on recently in the world of programming.  I hope I was able to explain stuff reasonably well so that it wasn’t incomprehensible.  I also hope that my inability to decide on the capitalization of words has not driven anyone to insanity.  Anyways, tomorrow is a post on my other blog, and then Saturday will be something creative.  I will be trying to think of a new topic to potentially replace programming for next week since I had a hard time thinking about what to write that would be both interesting and would not require a large amount of background knowledge.  If I think of something cool in the world of programming to write about, maybe I will continue with this next week, but if I come up with something else, well that’s OK too.