Posted by: elenadbutler | January 14, 2009

## Bus-wait and Rubik’s Cubes

Two recent items of note in the NYT:

Recently, a group of Harvard students were featured for their work on the Bus-Wait problem. The problem is this: you’re waiting for the bus, and start to get bored. You wonder if you should just walk. Scott Kominers and friends found that you should almost always wait. If you start walking and catch the bus on your way, you’re just getting on the bus you would have caught by waiting, so that won’t save you time. And if you start walking and you miss the bus, then you’re totally screwed.

Interestingly, if you have a very short journey, and the buses tend to come at long intervals (say, half an hour), you should walk. But, since most buses come often and move faster than you can walk, waiting is usually the best option. So go ahead, be lazy.

Another cool article is about Dr. Jessica Fridrich, who invented the most popular method for solving a Rubik’s Cube as quickly as possible. To use her method, the speedcuber (speedcubist? haha) must know and be able to deploy no fewer than 53 algorithms. The cuber first solves the top two layers, using the face with the white square in the middle as the top. Then, to solve the bottom layer, the cuber gets all the yellow squares onto the bottom, a step called “orientation” that draws on 40 potential algorithms. The last step is called permutation (yay!) and requires one of 13 algorithms to finish off the cube.

As cool as this sounds, to tell the truth, Rubik’s cubes kind of scare me. A friend of mine from high school did Rubik’s cube speedsolving competitions, and watching him was pretty impressive. Have any of you learned to solve a Rubik’s cube? How long did it take? Maybe once this final is over I’ll finally learn!

Posted by: danb | January 13, 2009

With the final coming up tomorrow and the possibility of a proof by contradiction on the exam, I remember one that I saw in high school and would like to share it, both as a review and as an interesting proof.  Does 1=.99999…?  At first I thought it didn’t, but then when I saw the proof I was convinced.  I think it went like this.

Thm: 1=.9999…

Pf: We will use contradiction to show that 1=.9999…

First, assume 1 does not equal .9999….

Let x=.9999….

Multiplying both sides by 10 and then subtracting x:

10x = 9.9999…

–   x =   .9999…

——————

9x = 9

So, x=1.  But we initially said x=.9999….  This contradicts our initial assumption that .9999… does not equal 1.  Therefore, we have shown that .9999…=1.

Now, not only is this a good review for the exam on how to construct proofs by contradiction, it also can be used to impress friends (and works well on dates).

Posted by: courtneyobrien | January 13, 2009

## Mathematical Modeling

I have a number of friends who took Mathematical Modeling this past semester.  For their final project, the students were required to develop a model using the mathematical techniques they had learned over the course of the semester. I thought that many of the projects were truly fascinating applications of math to real-world problems, especially the ones described below.

Our very own Sandeep modeled how the introduction of a second lane to a one-lane circular track could reduce congestion caused by natural, but random, fluctuations in velocity. To model traffic, he used a cellular automaton model: a grid of cells updated at each timestep, with each cell updated according to a rule that is a function of the states of the neighboring cells. After a certain amount of time, he then calculated the average velocity of the cars around the track to determine the effect lane changing had on congestion.
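To give a flavor of this kind of model, here is a minimal single-lane sketch in Python based on the classic Nagel-Schreckenberg rules. The specific update rule, parameters, and setup are my own illustration, not Sandeep's actual project:

```python
import random

def step(road, v_max=5, p_slow=0.3):
    """One parallel update of a circular single-lane road.

    road: list where road[i] is a car's velocity at cell i, or None if empty.
    Rules: accelerate, brake to avoid the car ahead, randomly slow down
    (the "natural but random fluctuation"), then move.
    """
    n = len(road)
    new_road = [None] * n
    for i, v in enumerate(road):
        if v is None:
            continue
        # distance (in cells) to the next car ahead on the ring
        gap = 1
        while road[(i + gap) % n] is None:
            gap += 1
        v = min(v + 1, v_max)          # accelerate
        v = min(v, gap - 1)            # brake to avoid a collision
        if v > 0 and random.random() < p_slow:
            v -= 1                     # random slowdown
        new_road[(i + v) % n] = v
    return new_road

def average_velocity(road):
    cars = [v for v in road if v is not None]
    return sum(cars) / len(cars) if cars else 0.0

# A 100-cell ring with 20 cars; run for a while, then measure congestion.
random.seed(0)
road = [0 if i < 20 else None for i in range(100)]
for _ in range(200):
    road = step(road)
print(average_velocity(road))
```

Measuring the average velocity after many steps, as in the last lines, is exactly the congestion statistic described above; a two-lane version would add a lane-change rule before the movement step.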

Duncan sought to identify the optimal utilization of non-directed donor kidneys in kidney transplant clearinghouses. Clearinghouses match up donor-patient pairs in order to facilitate kidney donations among family and friends.  For example, if a type A donor can’t donate to his friend, a type B patient, a clearinghouse can match them up with a type B donor who wants to donate to a type A patient.  Occasionally, a “non-directed donor” or “Good Samaritan Donor” will willingly donate his kidney to a clearinghouse.  In this case, there are more donors than patients.  Duncan proposed that rather than using the leftover kidney today, clearinghouses could save the extra kidney in order to catalyze more transactions in the future.  Duncan’s model attempted to identify the optimal blood type of that leftover kidney in order to maximize present and future transactions.  He based his model on an integer programming problem, defining a compatibility matrix C and an assignment matrix X, where Cij = 1 if donor i’s kidney is compatible with patient j (and 0 if not), and Xij = 1 if that donor is actually assigned to that patient.
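To see what the C and X matrices are doing, here is a toy brute-force version in Python. The matrices follow the definitions above, but the exhaustive search and the example data are my own illustration, not Duncan's integer program (which would scale far better):

```python
from itertools import permutations

def best_assignment(C):
    """Brute-force the assignment matrix X maximizing transplants.

    C[i][j] = 1 if donor i's kidney is compatible with patient j.
    X[i][j] = 1 if donor i is actually assigned to patient j.
    Assumes equally many donors and patients; fine for toy sizes only.
    """
    n = len(C)
    best, best_X = -1, None
    for perm in permutations(range(n)):      # donor i -> patient perm[i]
        X = [[0] * n for _ in range(n)]
        count = 0
        for i, j in enumerate(perm):
            if C[i][j]:                      # only count compatible pairings
                X[i][j] = 1
                count += 1
        if count > best:
            best, best_X = count, X
    return best, best_X

# Toy example: 3 donors, 3 patients (compatibility data made up for illustration)
C = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 1]]
n_transplants, X = best_assignment(C)
print(n_transplants)  # -> 3
```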

Ricky developed an NCAA College Basketball ranking model using Markov chains weighted according to the point margin of individual games.  His model also more heavily weighted games later in the season.
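A margin-weighted Markov ranking can be sketched in a few lines. The specific weighting and damping scheme below are my own illustrative choices, not Ricky's actual model:

```python
import numpy as np

def rank_teams(games, n_teams, damping=0.85):
    """Rank teams with a Markov chain: each team 'votes' for the teams
    that beat it, with weight proportional to the margin of victory.

    games: list of (winner, loser, margin) tuples.
    """
    M = np.zeros((n_teams, n_teams))
    for winner, loser, margin in games:
        M[loser, winner] += margin            # loser votes for winner
    # Normalize rows into transition probabilities (uniform if undefeated).
    for i in range(n_teams):
        total = M[i].sum()
        M[i] = M[i] / total if total > 0 else 1.0 / n_teams
    # Damped power iteration for the stationary distribution.
    P = damping * M + (1 - damping) / n_teams
    r = np.full(n_teams, 1.0 / n_teams)
    for _ in range(100):
        r = r @ P
    return r   # higher stationary probability = stronger team

games = [(0, 1, 10), (0, 2, 3), (1, 2, 7)]   # made-up scores
print(rank_teams(games, 3))
```

Undefeated team 0 ends up with the largest stationary probability, as you would hope. The late-season weighting mentioned above could be added by scaling each game's margin by a factor that grows with its date.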

Parth developed a model for analyzing color heterogeneity in films.  He looked at three measures: 1) comparison of each pixel in a frame to adjacent pixels, 2) comparison of each pixel in a frame to a grid of pixels evenly spaced across the rest of the frame, and 3) comparison of each pixel to the same pixel in the 2 frames before and 2 frames after it.
He found that measure 1 turned out to be nearly useless, and assigned it very little weight.  However, the other two measures were very useful.  He also compared his heterogeneity index to average ratings found on IMDB to see if there was any correlation.  He found that animated movies tend to be a lot more dynamic than live-action movies.

Posted by: tejat | January 13, 2009

## Mathemagic!

If you need a quick study break this reading period and facebook is no longer doing it for you, I really recommend checking out ted.com. The main purpose of the TED conference is to promote the spread of ideas, and the website is full of really awesome videos from the conference where speakers and performers come to do just that.  The website boasts all sorts of 20-30 minute lectures and performances, with topics ranging from ending poverty to toys that define our culture to the calculus of architecture.  I came across one really incredible video a couple of weeks ago that I thought our class might enjoy: this guy can do unbelievable mental math. The most fascinating part, in my opinion, is at the very end, when he thinks out loud as he squares a 5-digit number in his head. To “store” some of the numbers along the way, he converts them into words… I think it would be really neat to figure out what method he uses to do that.

In any case, I hope you enjoy Mathemagic!

http://www.ted.com/index.php/talks/arthur_benjamin_does_mathemagic.html

Posted by: mbuckley56 | January 13, 2009

## Shape of Universe

One of the more interesting topics we touched on in class, in my opinion, was the shape of the universe. I particularly liked watching the video of many earths in all directions. I believe the strangest of these videos was the hyperbolic depiction of the universe.  Because of my fascination with the shape of the universe, I decided to research it more; but, before I discuss my findings, I’d like to start slowly.

Everyone knows nowadays that the world is round. Long gone are the times when we believed it was flat. But my guess is that if we asked a non-mathematician what the shape of the universe was, they would extrapolate from their knowledge of the earth and claim that the universe was spherical, or at least “round.” I, for one, believed that the universe was spherical – not that I had thought about it much, though. But it turns out that it is actually possible that the universe is flat. There are many theories out there for the shape of the universe, and among them happens to be the spherical one.

According to Kate (I think), the prevailing idea is now that the universe is hyperbolic. This might be a biased belief because I think Kate really likes hyperbolas.  In order to understand a hyperbolic universe, it is helpful to know hyperbolic geometry. According to Wikipedia, a hyperbolic universe “can be thought of locally as a three-dimensional analog of an infinitely extended saddle shape.” I suppose we can all visualize this, but it seems different from the explanation of a hyperbolic universe that Kate gave in class. I think I am missing something, so could someone please address this? Maybe we talked about hyperbolic symmetries of the universe?

Apparently, scientists are trying to figure out the shape of the universe, and they do this by looking for mirror images of microwave background radiation from different directions and places in the universe. So far, these tests indicate that the spatial curvature of the universe is small (close to flat). So, contrary to the likely popular belief, the universe may be (almost) flat. Additional information on this subject can be found by reading about the Wilkinson Microwave Anisotropy Probe (WMAP), the probe that collects this data.

One of the major concerns when discussing the shape of the universe is whether it is open or closed. This corresponds to the sign of the curvature: a closed universe has positive curvature, while an open universe has negative curvature.

Another important topic is whether the universe is finite or infinite. There is much literature on the subject. Search “universe is infinite” on google and see. I think the prevailing view is that it is finite.

I don’t really understand most of what I read about the shape of the universe; I’ll need more math classes to comprehend it. Please fill in the gaps!

Michael

Posted by: dschadewald | January 11, 2009

## Brain Teasers

In the spirit of on-campus recruiting, I figured I would write a post about brain teasers which could potentially come up in an interview.

1.  Suppose you are at an auction and are bidding on a good whose value X is a random value uniformly distributed between 0 and 1000.  If you bid X or anything greater than X, you win: you pay your bid and can immediately sell the good for 1.5 times its value X.  If X is greater than your bid, you lose out and do not get anything.  What should you bid?

This seems like a tough sort of question, but it is actually pretty easy if you really think about it.  The answer is that you should bid nothing and just expect to break even.  The reason is that whenever you win the good, you are more than 50% likely to lose money.  An easy example demonstrates this: if you bid 100 and you win the good, then the value of the good is uniformly distributed between 0 and 100.  If the good is worth $66.67 or less, you lose money, since 1.5 times its value is less than the 100 you paid.  Thus there is a 66.7% chance that you will lose money on the deal.  Change 100 to any positive bid you’d like and you will see that you lose money on average.
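You can sanity-check this with a quick Monte Carlo simulation (a sketch, assuming the value is uniform on [0, 1000] as stated):

```python
import random

def expected_profit(bid, trials=100_000, seed=1):
    """Monte Carlo estimate of profit for a given bid.

    X ~ Uniform(0, 1000); if bid >= X you pay your bid and resell
    the good for 1.5 * X, otherwise nothing happens.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = rng.uniform(0, 1000)
        if bid >= x:
            total += 1.5 * x - bid
    return total / trials

for bid in (0, 100, 500, 1000):
    print(bid, round(expected_profit(bid), 1))
```

Analytically, a bid of b wins with probability b/1000 and the conditional expected value is b/2, so the expected profit is (b/1000)(1.5·b/2 − b) = −b²/4000: negative for every positive bid, which is exactly what the simulation shows.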

2. Suppose there is a room with 100 light bulbs labelled 1 through 100.  Person 1 walks in the room and turns on every light bulb.  Person 2 walks in the room and flips the switch on every 2nd light bulb (i.e. he turns off all even numbered light bulbs).  Person 3 walks in the room and flips the switch on every 3rd light bulb (turning some on and some off).  This process continues until person 100 goes through and flips the switch on the 100th light bulb.  Which light bulbs will be lit up at the end?

Again this is kind of tricky but not really.  For a light bulb to end up on, it must be flipped an odd number of times, which means its number must have an odd number of factors.  Factors come in pairs (d pairs with n/d), so the count is odd only when some factor pairs with itself, i.e. when n is a perfect square (for example, 4 has three factors: 1, 2, 4).  Thus light bulbs 1, 4, 9, 16, 25, 36, 49, 64, 81, 100 will be on at the end.
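The claim is easy to verify by simulating the process directly:

```python
def final_lit_bulbs(n=100):
    """Simulate the n-person, n-bulb switching process."""
    on = [False] * (n + 1)           # bulbs 1..n; index 0 unused
    for person in range(1, n + 1):
        for bulb in range(person, n + 1, person):
            on[bulb] = not on[bulb]  # person flips every multiple of their number
    return [b for b in range(1, n + 1) if on[b]]

print(final_lit_bulbs())  # -> [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```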

3.  You have 10 machines that produce 10g gold coins.  After a few years though, you discover that one of the machines isn’t working correctly and produces coins that are 10% lighter (i.e. 9 grams).  How do you figure out which machine is producing the light coins with only one weighing?

This one I think is pretty tough.  One good and practical solution is to label each machine from 1 to 10 and have each machine produce the number of coins that it is labeled (i.e. machine 1 produces 1 gold coin, machine 2 produces 2, etc.).  Then weigh all the coins at once.  If every machine were working, the coins would weigh 10g*(1+2+3+4+5+6+7+8+9+10)=550g.  However many grams the total falls short of 550g corresponds to the defective machine.  For example, if the total weighs 545g, then we know that machine 5 is producing the lighter coins.
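A few lines of Python confirm the trick identifies every possible culprit:

```python
def find_light_machine(bad_machine):
    """Take k coins from machine k, weigh once, read off the culprit.

    Each good coin weighs 10g; the bad machine's coins weigh 9g,
    so machine m removes exactly m grams from the expected 550g.
    """
    total = sum(k * (10 if k != bad_machine else 9) for k in range(1, 11))
    return 550 - total   # grams missing = label of the defective machine

for m in range(1, 11):
    assert find_light_machine(m) == m
print("the single weighing identifies every machine")
```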

Well I hope one of you guys gets one of these questions in an interview.  Good luck!

Posted by: sandeepchrao | January 2, 2009

## Applications of 152 in Secret Santa

(assuming you assign every person a number)

(9 4 6 7 13)(2 12 3 5 11 1 10 8): a permutation with one 5-cycle and one 8-cycle

The previous year, we had a Secret Santa with the same group, and the gift cycle had the structure of this permutation:

( 1  7  13 4  2  11  9  12 3  8  5  10  6)

One full cycle of 13 is rare among all the cycle structures possible with the 13! possible permutations. I think we were pretty lucky.
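Here is a small Python sketch that recovers the cycle structure from a Secret Santa assignment (the giver-to-receiver dictionary encoding is my own illustration):

```python
def cycle_type(mapping):
    """Return the cycle lengths of a permutation given as a dict
    giver -> receiver (e.g. a Secret Santa assignment)."""
    seen, lengths = set(), []
    for start in mapping:
        if start in seen:
            continue
        length, cur = 0, start
        while cur not in seen:       # walk the gift chain until it closes
            seen.add(cur)
            cur = mapping[cur]
            length += 1
        lengths.append(length)
    return sorted(lengths, reverse=True)

# The (9 4 6 7 13)(2 12 3 5 11 1 10 8) assignment from above:
cycles = [(9, 4, 6, 7, 13), (2, 12, 3, 5, 11, 1, 10, 8)]
mapping = {}
for cyc in cycles:
    for a, b in zip(cyc, cyc[1:] + cyc[:1]):
        mapping[a] = b
print(cycle_type(mapping))  # -> [8, 5]
```

Incidentally, exactly (n−1)! of the n! permutations are a single n-cycle, so a full 13-cycle turns up in about 1 in 13 random Secret Santas.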

Anyway, thought this was interesting. Happy Holidays.

Posted by: Prof. Kate | December 22, 2008

## Programming Projects

Hi everyone,
Several students did optional computer projects this year.  The first is a finite field calculator, which will multiply elements of a finite field.  These are represented as polynomials in x, a primitive element.  To find out what polynomial is being used to generate the finite field, try multiplying the appropriate power of x.

Finite Field Calculator by Michael Buckley

Finite Field Calculator by David Haley

The second is a permutation calculator.  It will conjugate, multiply and find inverses for permutations on as many as 9 elements.

Permutation Calculator by David Haley

Permutation Calculator by Michael Buckley

Enjoy and happy holidays everyone!

Kate.

Posted by: elenadbutler | December 21, 2008

## Change Ringing!

Happy Holidays Math 152!

A few Saturdays ago, Kate took a couple of us to visit the change ringing rehearsal at Old North Church in Boston. Old North Church houses the oldest bells in North America, which have hung in the church spire since 1745. There are 8 bells, and they are huge! They weigh between 500 and 1300 pounds.

Change ringing and group theory are related because the mechanics of the bells limit the ringers’ ability to control them once they are ringing. In fact, the only changes in the order of pitches heard come from two bells switching places in the order, like the transpositions we studied in class. In change ringing, the goal is to ring a peal, a series of permutations of the pitches in which no permutation is repeated. A full peal is such a series in which all the possible permutations of the pitches are rung. We learned that such a peal can take a very long time, depending on the number of bells! Since there are n! possible permutations of n bells, ringing the 40,320 permutations of 8 bells could take almost an entire 24-hour day. A full peal on 6 bells is far more manageable–it only takes 20 or 30 minutes.

Ringers typically memorize methods, patterns of ringing that allow the ringers to go through the permutations without memorizing each individual one. We learned about a simple pattern. You start with bells 1, 2, 3, 4 in that order. Then, you alternate between switching the order of the interior two bells and switching the order of the pairs of outside bells. Sorry if this doesn’t make any sense; let me show you:

1 2 3 4

1 3 2 4 (switch interior pair)

3 1 4 2 (switch LHS and RHS pairs)

3 4 1 2 (switch interior pair)

4 3 2 1 (switch LHS and RHS pairs)

4 2 3 1 (switch interior pair)

2 4 1 3 (switch LHS and RHS pairs)

2 1 4 3 (switch interior pair)

1 2 3 4 (switch LHS and RHS pairs)

(You repeat the 1234 pattern at the end.) Another cool peal we learned about was a full peal on 5 bells, where the 60 even permutations are rung, a transposition (a bell switch) is called, and then the 60 odd permutations are rung to make up the full peal.
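The alternating-swap pattern above (known as plain hunting) can be generated mechanically. Here is a short sketch; the swap-offset encoding is mine, and it starts with the interior swap to match the sequence shown above:

```python
def plain_hunt(n):
    """Generate the alternating-swap pattern on n bells: alternately swap
    adjacent pairs starting at position 1 (the interior pair on 4 bells),
    then at position 0 (the outside pairs), until the original order returns.
    """
    row = list(range(1, n + 1))
    rows = [row[:]]
    start = 1                 # the post's version swaps the interior pair first
    while True:
        for i in range(start, n - 1, 2):       # swap adjacent pairs
            row[i], row[i + 1] = row[i + 1], row[i]
        rows.append(row[:])
        start = 1 - start                      # alternate the swap offset
        if row == rows[0]:                     # back to 1 2 3 ... n
            break
    return rows

for r in plain_hunt(4):       # reproduces the 1234, 1324, 3142, ... rows above
    print(*r)
```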

We had lots of fun visiting the change ringers at Old North and even got to take in the view at the top of the spire! If you are interested in learning to ring bells, they urge that you visit them during their rehearsals from 11 to 1 on Saturdays because they are always looking for new people to learn!

Jack learning how to ring:

The view:

“One if by land, two if by sea”:

Posted by: rfgarcia | December 18, 2008

## Casino Night Preview: Markov Chains and Blackjack

This post (last post ever!) assumes some knowledge of Markov chains (Wikipedia).

In 2004, two professors from Rice University modeled various aspects of the game of Blackjack with Markov chains (link). Their model for the dealer’s hand is the simplest, but provides valuable insight. The dealer’s hand can be modeled in 36 states:

1. 10 initial states representing the first card dealt to the dealer: Ace, 2, 3, 4, 5, 6, 7, 8, 9, T (where T is any card worth 10).
2. 13 states representing the dealer holding a hand that is worth a “hard” total (no Ace counted as 11) of 4, 5, …, 16
3. 6 states representing the dealer holding a “soft” hand worth 12, 13, …, 17
4. 5 states representing the dealer standing with a total of 17, 18, 19, 20, 21.
5. 1 state for the dealer holding blackjack.
6. 1 state for when the dealer busts.

The last 7 states are absorbing states–once the process reaches one of them, it stays there. Assuming that the dealer is pulling cards from an infinite number of decks, the probability $d_i$ of the dealer drawing card $i$ is simple:

$d_A = d_2 = d_3 = ... = d_9 = 1/13$; $d_T = 4/13$

With these probabilities defined, the 36×36 transition matrix $P$ can be determined, as well as the 1×36 vector $p_0$ representing the distribution of initial states.  Since the dealer is pulling from an infinite deck, it is possible for the dealer to draw as many as 17 cards (e.g. a long run of aces), so the maximum number of time steps in the Markov process is 17.  To figure out the probabilities of the dealer reaching a particular outcome, we can observe the probability of being in one of the absorbing states by calculating $p_0P^{17}$.
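The same dealer distribution can also be computed without building the full 36×36 matrix, by recursing directly over dealer hands. This is a sketch, not the paper's formulation: it assumes an infinite deck and a dealer who stands on soft 17, and it folds blackjack into the 21 outcome rather than keeping it as its own absorbing state:

```python
from functools import lru_cache

# Infinite-deck draw probabilities: ace and 2-9 each 1/13, ten-value 4/13.
CARDS = {i: 1 / 13 for i in range(2, 10)}
CARDS[1] = 1 / 13
CARDS[10] = 4 / 13

@lru_cache(maxsize=None)
def dealer_outcomes(total, soft):
    """Distribution over final dealer totals (17..21 or 'bust').

    total: current hand value with every ace counted as 1.
    soft: True if one ace can still be counted as 11.
    """
    best = total + 10 if soft and total + 10 <= 21 else total
    if best >= 17:                      # dealer stands (incl. soft 17)
        return {best if best <= 21 else 'bust': 1.0}
    dist = {}
    for card, p in CARDS.items():
        new_total = total + card
        if new_total > 21:              # bust even counting all aces as 1
            dist['bust'] = dist.get('bust', 0.0) + p
        else:
            sub = dealer_outcomes(new_total, soft or card == 1)
            for outcome, q in sub.items():
                dist[outcome] = dist.get(outcome, 0.0) + p * q
    return dist

# Conditional distribution given the dealer shows an ace:
dist = dealer_outcomes(1, True)
for outcome in (17, 18, 19, 20, 21, 'bust'):
    print(outcome, round(dist.get(outcome, 0.0), 4))
```

The bust entry comes out close to the 11.5% figure quoted below, which is a nice cross-check on the matrix calculation.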

Assuming the dealer stands on soft 17, here’s the distribution of dealer outcomes:

Besides being a benchmark, this graph isn’t that useful (we can see one of the dealer’s cards at the casino table). Here’s a conditional distribution of dealer outcomes given that the first card the dealer is dealt is an ace:

Showing an ace, the dealer has only an 11.5% chance of busting! This is part of the reason why basic strategy dictates that you hit any total less than 17 against a dealer’s ace: if you stand on such a total, you win only when the dealer busts, which happens just 11.5% of the time.