The BBC recently asked: "Do doctors understand test results?" Medical statistics are an important application of a subject students often find uninteresting (precisely because it tends to be taught without considering its applications). But if you're holding the positive result of a blood test in your hand you might wish you'd listened a bit harder in statistics class . . .

To help you understand let's start with a simple, non-medical example.

You want to go out for a walk this afternoon, but you're worried that it might rain. You turn on the television: the forecast is for rain. Should you give up on your walk?

You decide to do a little research. You go to the weather forecaster's website and discover that they claim a 90% accuracy rate: out of 100 days on which it rained, they predicted it would rain on 90 of those days. Sounds pretty good.

Digging a little deeper you discover that out of 100 days on which it did not rain, they correctly predicted it would be dry on 80 of those days. That's not too bad, either.

It looks like the forecaster is pretty reliable and they've said it will rain. You decide to go ahead with your walk but you take your umbrella with you.

However, it's bright sunshine the whole time! You didn't need the umbrella at all!

Why?

Because you didn't use **Bayes theorem**.

You see, it turns out that it rains only 10% of the time where you live. So in 100 days, it rains on 10 of those days. And the weather forecaster, with its 90% accuracy rate, would correctly predict rain on 9 of those 10 days.

On the other 90 out of 100 days, it doesn't rain. But the weather forecaster would wrongly predict rain on 20% of these. So on 18 days the forecast would be for rain when it didn't actually rain.

In total then, the weather forecaster predicts rain on 9 + 18 = 27 days out of 100. But on only 9 of those days does it actually rain. So the proportion of days on which it rains when the weather forecaster has predicted rain is 9/27, which is only one third. That's pretty unreliable.
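The arithmetic above can be checked with a short Python sketch (the 90%, 80% and 10% figures are the ones from the example):

```python
# Bayes theorem for the rain forecast example.
p_rain = 0.10                 # it rains on 10% of days
p_forecast_given_rain = 0.90  # the forecaster's advertised "accuracy"
p_forecast_given_dry = 0.20   # false alarms: 1 - 0.80

# Total probability that the forecast is for rain (27 days in 100).
p_forecast = (p_forecast_given_rain * p_rain
              + p_forecast_given_dry * (1 - p_rain))

# Bayes: P(rain | forecast of rain) = true positives / all positives.
p_rain_given_forecast = p_forecast_given_rain * p_rain / p_forecast
print(p_rain_given_forecast)  # one third
```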

The impressive statistic ("90% accuracy!") on the weather forecaster's website was the answer to the following question: "Given that it rained, what is the probability that the forecast was for rain?"

The problem arose because this question is the wrong way round. What you really want to know is, "Given that the forecast is for rain, what is the probability that it will actually rain?" The statistic here is much less impressive: about 33%.

Why did this happen?

Although the weather forecaster often correctly predicts rain when it actually rains, it doesn't rain very often, so the number of days on which it rains and on which rain is predicted is small (9 days). And although the weather forecaster rarely predicts rain when it doesn't rain, there are many days on which it doesn't rain, so there are many opportunities for an incorrect forecast (18 days out of 100).

Thus a prediction of rain is more often associated with a dry day than with a wet day. And that's what happened to you today on your walk.

And that's Bayes theorem.

Returning, then, to medicine, we need to adjust the example. For rain read "disease"; for forecast read "diagnostic test". Bayes theorem says that the question of interest is "Given that the test is positive, what is the probability that the patient actually has the disease?"

There are two things we wish to avoid. A false negative occurs when a patient with the disease is diagnosed as being healthy. A false positive occurs when a healthy patient is diagnosed as having the disease.

The answer to our question -- "Given that the test is positive, what is the probability that the patient actually has the disease?" -- is the number of sick patients who get a positive test result divided by the number of patients (both sick and healthy) who get a positive test result. (If you like: true positives divided by all positives.)

For this ratio to be high (i.e. for the diagnostic test to be reliable) we need the number of false positives to be very low.

For example if we have 10 true positives and 1 false positive, then the proportion of true positives is 10/11, which is very high. But if we have 10 true positives and 10 false positives, then the proportion is 10/20, which is no better than diagnosis by tossing a coin!

Problems arise when the base rate of the disease amongst people who are tested is low. In a screening programme for a rare disease, even a low rate of false positives will throw up a large number of positive test results: most of the people tested are healthy, and a small proportion of a large number is still a sizeable group of people, all of whom will be wrongly diagnosed. And even if the test is very good at identifying sick people, the actual number of sick people is low (because the disease is rare), so the number of true positives may not be very high. The ratio of true positives to all positives may therefore not be very high, just as in my rain example.
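A small helper function makes the base-rate effect easy to explore. The screening numbers below (99% sensitivity, 95% specificity, a disease affecting 1 in 1,000) are purely illustrative, not taken from any real test:

```python
def positive_predictive_value(sensitivity, specificity, base_rate):
    """Probability of disease given a positive test:
    true positives divided by all positives."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# The rain example: "sensitivity" 90%, "specificity" 80%, rain on 10% of days.
print(positive_predictive_value(0.90, 0.80, 0.10))   # one third

# A hypothetical screening test for a rare disease: even 99% sensitivity
# and 95% specificity give a predictive value of only about 2%.
print(positive_predictive_value(0.99, 0.95, 0.001))
```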

So what did happen?

Not only was the next spin a black, it was black 17 again. In fact, black 17 had come up three times in the last nine spins.

This is a great example of how clumpy randomness is. People tend to associate randomness with evenness. In the very long run they're right. In a very large number of spins, you would expect to see black about half the time, and black 17 about one time in 37. But in the short run, you often get clumpy results such as this.

An exercise I like to use with students is to ask them to write down a sequence of 100 random digits generated from their own heads. Two things typically happen. First, they find it surprisingly hard. Initially they write digits quite quickly, but they soon slow down. This is because they're thinking: they're trying to make the digits look random. But the second thing that happens is that they fail. For example, about one time in ten you would expect the same digit to be repeated. So in a list of 100 random digits, you'd expect about ten repeats. Typically students will generate fewer than this. You'd also expect to see one example (on average) of three of the same digits in a row in a set of 100 random digits. In a class of students, this very rarely happens. Indeed, in a class of 10 students, you'd expect to see one example of four identical digits in a row. This never seems to happen because it just doesn't feel random.
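You can check how clumpy genuinely random digits are with a quick simulation, counting the adjacent repeats and runs of three that students so rarely produce:

```python
import random

def count_streaks(digits, length):
    """Count positions where `length` identical digits appear in a row."""
    return sum(
        len(set(digits[i:i + length])) == 1
        for i in range(len(digits) - length + 1)
    )

trials = 2000
repeats = runs_of_three = 0
for _ in range(trials):
    digits = [random.randrange(10) for _ in range(100)]
    repeats += count_streaks(digits, 2)
    runs_of_three += count_streaks(digits, 3)

print(repeats / trials)        # about 10 adjacent repeats per 100 digits
print(runs_of_three / trials)  # about 1 run of three per 100 digits
```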

But it is. Randomness is clumpy.


I was idly browsing Quora when I came upon the following sequence:

1, ∞, 5, 6, 3, 3, 3, …

It is highly unusual to have infinity as a term in the middle of a sequence – whatever could it be?

Well, the nth term is the number of **convex regular polytopes in n dimensions**. Ah! That explains it?

It's easiest to start with the second term of the sequence. It's the number of **regular polygons** that you can make. A regular polygon is a plane figure with straight sides, all of equal length and meeting at equal angles. The most obvious is perhaps a square. But we additionally require the polygons to be *convex:* this means that the sides cannot turn in on themselves. The figure below on the left is a convex regular pentagon; the figure on the right is also a regular pentagon, but it is not convex.

It should be fairly obvious that you can construct regular polygons with any number of sides:

So the number of convex regular polygons is infinite. And that's the second term of the sequence.

What about the others? Let's look at the third term: 5. This is the number of **regular polyhedra**. That is, the number of three dimensional shapes, where each face is a regular polygon. (Again, we require them to be convex.) The five regular polyhedra are called the **Platonic solids**, and they are illustrated in the photograph above. From left to right: icosahedron, dodecahedron, cube, octahedron and tetrahedron. It is not possible to construct any other regular polyhedra: the angles won't fit together to form a closed solid.

In informal language, then, the sequence is the number of regular shapes in one dimension, two dimensions, three dimensions, and so on. The word "polytope" is the generic term that covers polygons (two dimensions), polyhedra (three dimensions), and all the others in higher dimensions. Perhaps surprisingly, it is the two-dimensional world that offers the greatest variety.

The picture above, taken from a fantastic article about the joys of mathematics in the Times Educational Supplement, is one of many, many proofs of Pythagoras theorem.

I like proofs with no words such as this one. Though you should be careful not to be tricked by "proofs" such as this one.

Online casinos often try to lure in new members by offering a sign-up bonus. One I saw recently offered a free £15 bet. Sounds like a great deal: gamble with someone else's money – if you're lucky you get the winnings, and if you're not you lose nothing.

Being a law graduate, I'm trained in reading the small print. Before you can withdraw your winnings from your free £15 bet you have to gamble **99 times** the £15. In other words £1,485. And you have to gamble that money **within 30 days**.

Let's say you play roulette. Online casinos typically use roulette wheels with a single 0, as well as 1 to 36, half of which are coloured red and the other half are black. The 0 is coloured green. The simplest bet is red or black. If you bet red and red comes up you win double your money (i.e. the money you bet, plus the same amount again). However, because of the 0 (which is neither red nor black) the probability of winning is 18/37, which is a little less than a half.

Now suppose you bet £10 and that you always bet on red. If you played 37 games, you'd expect to win 18 of them and lose 19. So you'd wager £370 and win back £360: a net loss of £10.

To get your winnings from your free £15 bet, you'd have to gamble £1,485. If you were betting on red as above, you can expect to lose slightly over £40. Not such a great deal. Especially when you consider that your free bet may not actually win at all, in which case you'll get nothing back. So, for every two new players, the casino can expect to gain about £80 from their losses while paying out the £15 bonus to only one of them. And, they hope, by the time you've placed 99 bets you're already hooked and will continue playing for a long time, each time losing on average about 3-6% of what you bet, depending on the game you play.
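Here's that arithmetic as a short script (the 18/37 red bet and the 99 x £15 wagering requirement are as described above; the two-player accounting is the same rough estimate):

```python
# Expected cost of the "free £15 bet" promotion, always betting on red.
p_win = 18 / 37
house_edge = 1 - 2 * p_win        # 1/37, about 2.7p lost per £1 bet on red

wagering_requirement = 99 * 15    # £1,485
expected_loss = wagering_requirement * house_edge
print(round(expected_loss, 2))    # a little over £40

# The casino's side, per two new sign-ups (the free bet wins about half the time):
casino_take = 2 * expected_loss   # roughly £80 in
bonus_paid = 15                   # £15 out, to roughly one player in two
print(round(casino_take - bonus_paid, 2))
```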

You may be wondering who could possibly afford to bet so much money in such a short time. It's a lot easier than you think, because you typically gamble the same money over and over again. Suppose you have £10 to play on a slot machine. On some spins you'd win, on some you'd lose. What would you do with your winnings? People generally recycle their winnings, putting them back in the machine. A typical slot machine takes "only" 3% of the money you bet. 3% of £10 is 30p: hardly a big loss. But it's not the £10 that the slot machine takes 3% from. It's the total money you put in the machine, which is likely to be far more than £10, because you'll keep putting the same money in over and over again. This is how it's perfectly possible to come away completely empty-handed.
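A simulation shows how recycling winnings multiplies the amount wagered. The machine below is a stylised stand-in (a £1 stake paying £10 with probability 0.097, i.e. a 97% average return), not a model of any real slot:

```python
import random

def total_wagered(bankroll=10, stake=1, payout=10, p_win=0.097):
    """Play a stylised 97%-return slot machine, recycling all winnings,
    until the bankroll is gone. Returns the total put through the machine."""
    wagered = 0
    while bankroll >= stake:
        bankroll -= stake
        wagered += stake
        if random.random() < p_win:
            bankroll += payout
    return wagered

trials = 5000
average = sum(total_wagered() for _ in range(trials)) / trials
print(round(average))  # roughly £330: the £10 goes through over 30 times
```

The average agrees with the back-of-envelope figure: with a 3% edge, a £10 bankroll supports about £10 / 0.03 ≈ £333 of total wagering before it is exhausted.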

Incidentally, I've said that the £15 bonus actually costs you £40. Another way of putting this is that the casino gives you back £15 of your expected £40 loss after gambling £1,485. "Real" casinos in places like Las Vegas follow a similar practice of returning a share of expected losses. Regular gamblers can ask to be "rated" by the casino they play in. The casino monitors your gambling and gives you around a third of your expected losses back – not in cash but in freebies: usually food, drink, accommodation in the casino and tickets to shows.

Naturally I was delighted that, once again, the Guardian named Cambridge as the top UK university. But what does this really tell us? Am I right to feel smug because I went to Cambridge myself?

Well, maybe not. I am instinctively opposed to league tables both because I find our obsession with ranking things unhealthy and unhelpful, and because I think it is impossible to do so meaningfully.

Let's look at school league tables. These are based on grades obtained in public examinations. I'm simplifying here, but basically you add up the number of A grades obtained in each school, divide by the number of students in that school and you get an average. Good schools have high averages, bad schools have low averages. Pretty uncontroversial, surely?

I argue that pretty much every step of the process is flawed. For example, adding up the number of A grades. This is only meaningful if all A grades are the same. Does an A grade in Maths mean the same thing as an A grade in Theatre Studies? (If so, *what* does it mean?)

Let's try an easier question. Does an A grade in Maths mean the same thing as an A grade in Maths? That's a ridiculous question, surely? Well, no. Different schools use different exam boards for their maths exams. Can we be certain that A grades given by different exam boards mean the same thing? (OCR's exams have always struck me as much harder than Edexcel's, for example.)

Let's narrow it down further. Does an A grade in Edexcel's Maths mean the same thing as an A grade in Edexcel's Maths? Not necessarily. Students can choose which modules they sit. Are the exams on the Statistics modules directly comparable to those on the Mechanics modules? (Edexcel's module in Decision Maths is often seen as markedly easier than the other modules, despite counting equally for the final grade.)

Hmm. OK, then. Does an A grade in Edexcel's Maths mean the same thing as an A grade in Edexcel's Maths where the modules taken are the same? Not if they're not taken at the same time. Can we be certain that the C3 exam and the marks obtained in it by candidates are consistent from one exam sitting to another? (Edexcel's C3 exam in June 2013 was an internet sensation within hours of the end of the exam because it was considered unusually difficult. Edexcel responded by dropping the grade boundaries quite markedly. How precise is that process? How precise could it possibly be?)

Surely I'll concede that if two students sit the same maths modules set by the same exam board at the same time and both get A grades, then those two A grades are equal?

Nope. An A grade requires an average of 80 marks per module. Or more. One candidate could have got an average of 80. The second could have got an average of 90.

So the second one is better? Not necessarily. Maybe the second one got very high marks on the easier modules, which boosted his average. C1 is the simplest module, but it counts equally. Very high marks in C1 can make up for low marks in, say, C3. Maybe the second student was sitting some of the modules for the second time, having tried them a year earlier and not done so well.

I think it's perfectly possible that a student with a B grade is meaningfully better at maths than a student with an A grade. Yet the A grade student will be off to a top-ranked university, and the B grade student will have to settle for his second choice.

But now I've fallen into the league table trap. *Top-ranked university.* What does that mean? If you can't even compare A grades in the same subject and be sure that you're making a meaningful, consistent judgement, how can you compare entire universities and say that some are "better" than others?

I'll bet there are some lecturers at London Metropolitan (ranked bottom of the Guardian's table) who are better than some lecturers at Cambridge. (Uh oh. *Better.* What does that mean?) Stephen Hawking was a professor at Cambridge: that didn't mean you'd be certain to be taught by him or even that you'd ever see him at all. And just because he was incredibly clever doesn't mean he was an incredible teacher. I know I'm not the only person who gave up on A Brief History of Time well before the final chapter.

Malcolm Gladwell agrees with me. He wrote an excellent piece for the New Yorker on the subject of ranking colleges in the USA.

(The picture above was taken from the roof of the Wren Library at Trinity College, Cambridge, where I got my first degree. Despite having argued that I'm not entitled to feel smug about having been to Cambridge, I'm still a little proud of it, even if I'm not sure why I should be.)

The **BBC reports** that 2014 Oscar winner Dan Piponi, who was part of the team which pioneered simulating smoke and fire in films such as Avatar and Puss In Boots, said: "Nobody told me if I wanted to get an Academy Award, I should study mathematics." Joshua Pines, who worked on the film Coraline, was honoured for developing image-processing mathematics to standardise colour.

How many people do you think need to be in a room such that it is more likely than not that two of them share the same birthday?

The answer may surprise you. It's 23.

I mentioned this to one of my students yesterday and he didn't believe me. OK, I said, what's your favourite football club? **Chelsea**. So we looked at the biographies of the members of the **first team** to see if any share a birthday. Chelsea's website listed 24 members of the first team, though only 23 had their birthdays listed. So there was a 50:50 chance we'd find a match.

And we did! **David Luiz** and **John Mikel Obi** both have their birthday on 22nd April.

Here's the best part: so do I! What are the chances of that?!

Let's start with the Chelsea players. We looked at 23 of them. We needed to find a matching birthday. How many potential matches are there? First on the list is Petr Cech. We can compare his birthday with any of the remaining 22 players. So that's 22 potential matches. Next on the list is Branislav Ivanovic. We've already compared him with Petr Cech, but there are still 21 other players to compare his birthday with. That's a total of 43 comparisons so far. Then there's Ashley Cole. We've already compared him with both Petr Cech and Branislav Ivanovic, but that still leaves 20 other players to compare his birthday with. So now we've looked at 63 different pairs of players.

There's a pattern here. The total number of comparisons we can make amongst the 23 players is 22+21+20+19+...+3+2+1 = 253. That's a surprisingly large number and suddenly it's not looking quite so surprising that we've got a 50:50 chance of finding a match.

For any given pair, the probability they share a birthday is 1/365, ignoring leap-year birthdays. So the probability they don't share a birthday is 1–1/365 = 364/365.

The probability no pair shares a birthday is (364/365)^253, since there are 253 pairs, each of which must not share a birthday.

So the probability that there is (at least one) pair that does share a birthday is 1–(364/365)^253, which is 0.5005 to 4 decimal places. Pretty much bang on 50:50.
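Here's the calculation in a few lines of Python. Treating the 253 pairs as independent is an approximation (they aren't quite); the exact calculation, which multiplies the probabilities that each successive birthday avoids all the previous ones, gives 0.5073, pleasingly close:

```python
# The birthday calculation for 23 people, two ways.
n = 23
pairs = n * (n - 1) // 2          # 253 pairs

# Approximation used above: treat the 253 pairs as independent.
approx = 1 - (364 / 365) ** pairs
print(round(approx, 4))           # 0.5005

# Exact: 1 minus the probability that all 23 birthdays are distinct.
p_distinct = 1.0
for i in range(n):
    p_distinct *= (365 - i) / 365
print(round(1 - p_distinct, 4))   # 0.5073
```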

The probability we would find two Chelsea first team players sharing a birthday *with me* is 0.5005 x 1/365, which is about 0.0014, or a little over one in a thousand. Which is pretty small!

All this inspired me to dig a little deeper.

First I drew a graph. (You can click on the graph to enlarge it.) This shows the probability of two people having the same birthday on the y-axis and the number of people in the room on the x-axis. The red line shows a probability of 50%, which crosses the graph at 23 people. The blue line shows a probability of 95%, which crosses the graph at 48. So if you have 48 people in a room it is very likely there'll be two people sharing a birthday.

I then looked at three people sharing a birthday, and four, and five, and six, and seven, and . . . then my computer could no longer handle the calculations.
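Where the exact calculations become unmanageable, a Monte Carlo simulation will do. This sketch estimates the probability that at least k people out of n share a birthday; the figure of 88 people for a triple is the standard published threshold for that version of the problem:

```python
import random
from collections import Counter

def p_shared_birthday(n_people, k, trials=10_000, days=365):
    """Estimate the probability that at least k of n_people share a birthday."""
    hits = 0
    for _ in range(trials):
        counts = Counter(random.randrange(days) for _ in range(n_people))
        if max(counts.values()) >= k:
            hits += 1
    return hits / trials

print(p_shared_birthday(23, 2))   # about 0.51
print(p_shared_birthday(88, 3))   # about 0.51: 88 people for a triple
```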

I discovered, for example, that if you have 800 friends on Facebook, it is *certain* that there will be (at least one) day in the year on which seven of your friends have a birthday. By "certain" I mean that the probability is so close to 1 that there is only a tiny, tiny chance that you won't have such a day. (It's **0.00002**!)

I produced a **graph** on **Desmos** to illustrate all this. The slider on the left varies the value of k, the number of people sharing a birthday: you can vary it from 2 to 7. The horizontal axis is the number of people in the room (or the number of Facebook friends you have) and the vertical axis is the probability that you will have k friends who share the same birthday.

In fact this problem has been exhaustively researched and discussed. It's called the **Birthday Problem**, or sometimes the Birthday Paradox (as the number of people required is much smaller than you might imagine).

**UPDATE**

I tried this with a couple of other students, with equally impressive results. When we went through the **Arsenal first team**, we found a pair *in the first two players* we tried: **Wojciech Szczesny** and **Lukasz Fabianski** were both born on 18th April. With **Manchester United**, it took five players to find a match: **David De Gea** and **Rio Ferdinand** were both born on 7th November.

The Daily Telegraph gives some excellent advice on applying to medical schools.

Getting into medical school is hard. I didn’t realise just how hard until I started to research it. According to Ucas figures for 2012 entry, there were 82,489 applications to medical courses for only 7,805 places. This means there were 10.6 applicants for every place.

To give yourself the best possible chance of success – or at least a fighting chance of an interview – your application needs to fulfil the tough academic requirements and have an “X factor” that will catch the eye of the admissions tutor, too.

For prospective medical students like me, it is a daunting prospect. How to succeed? I talked to doctors, medical students and academics involved in the admissions process to find out.

First, the basics: you need top grades – not just at A-level, but also at GCSE. Candidates with A/A* GCSE results in English language, maths and science are preferred, and in reality most successful applicants will boast As and A*s in a wide range of subjects. Nearly all universities ask for chemistry A-level and at least one other science: some insist this should be biology. A third A-level is needed, and it can be any subject (although most medical schools will not accept general studies or critical thinking). Realistically, to gain an offer your predicted grades must be AAA at least. If you do get a conditional offer, your place at medical school will be assured by meeting the required grades: AAA, or even A*AA at some universities.

Most universities ask applicants to take either the BMAT or the UKCAT aptitude tests, which examine GCSE scientific knowledge and aptitude for medicine by assessment of verbal reasoning, data analysis, abstract reasoning, decision-making and judgment in real-life situations.

Students are told that there is a limited amount of work they can do to prepare for the aptitude test, but Joe Hamilton, a third-year medical student, told me otherwise. Hamilton was rejected by all four of his chosen universities the first time round. His below-par UKCAT mark was partly responsible. “Two of the rejections I received were due to the fact that I did not score highly enough in the UKCAT.” So how did he make sure he got a better score the following year? “The second time around I did a two-day course in London and a lot more practice before sitting the test.” He dramatically improved his score.

The course Hamilton took is run by Kaplan, an international exam-preparation organisation, and teaches techniques for answering questions from each section of the test. For instance, careful time-management counts: it is crucial that you attempt all sections, as often the questions that carry more marks are towards the end of the paper. That’s a useful insight, but at £315 the course is not cheap.

“They say you can’t prepare for the UKCAT, only familiarise yourself with the questions,” Hamilton says. “I found that was not the case and the more practice you do, the higher the score you will get. I know a lot of others who are at medical school with me now had exactly the same experience of the UKCAT.”

Universities will also be looking for evidence that you are genuinely interested in medicine and have read widely around the subject, gaining insight into the NHS and health care generally. Dr Lawrence Seymour, a consultant in acute medicine at a teaching hospital, recommends starting as early as GCSE year. “I would advise a would-be doctor to keep a folder and collect anything in the general press or from medical journals such as the BMJ [formerly the British Medical Journal] that relates to medical advances, new treatments – anything that catches their interest.”

Before applying, students should make sure they have a clear idea of what being a doctor is about, says David Bender, an emeritus professor of nutritional biochemistry at University College London, and a former member of the medical admissions team. “Students thinking about applying to medical school should talk to doctors and medical students to find out what the course and the job is really like,” he says. “It is not all the glamour you see on television.”

Nearly all medical schools require applicants to have some sort of health-care-related work experience. I asked Dr Patrick Harkin, the deputy director of medical admissions at the University of Leeds, what counted as relevant experience. “Volunteering in a hospice is work experience, even if it’s not necessarily what you think of first. In fact, anything that has clinical relevance is work experience. Care homes, hospices, pharmacies, all places where something clinical is happening.” You don’t need a long list of placements, Dr Harkin says, as long as it is clear that you have learnt from what you’ve done. “It’s not about what you do; it’s about how much you get out of it. Some people get more out of a week than others get out of a month.”

Having said that, working or volunteering in a clinical setting for a prolonged period of time is valuable. “If you stick at something for six months, that shows dedication and an interest. If you’ve been at 15 different things we might start to wonder about your commitment, or your ability to get on well with other people.”

Work experience can also enhance your vital communication skills. Leo Feinberg, president of the University of Birmingham’s MedSoc and a third-year medical student, volunteered in an acute medical unit, where he learnt what he says is one of medicine’s most important lessons: that “Patients want to talk. They may be nervous, and they need someone to offload to.”

Only three medical schools, Belfast’s Queen’s, Edinburgh, and Southampton, do not interview prospective medical students. But certain medical schools place more emphasis on personal statements than others and information about this can be found on their websites. (Many universities provide a guide to writing the personal statement, as does UCAS.)

Dr Harkin stresses that a personal statement must concentrate on the individual’s unique experiences relevant to their choice of career. “Your personal statement is personal. It is about you. We are not after great prose. This is not a creative writing course.” Hamilton agrees: “Anything that I thought was relevant to my application, that I had gained something from, I put into my personal statement.”

At the interview, tutors are looking for commitment and enthusiasm. They also assess aptitude, empathy, communication skills and social awareness. Preparation is vital, says Dr Seymour. “Interview practice is really, really important. Most candidates are stumped or struggle to sound sincere when you ask them 'So why do you want to study medicine?’ My advice is: practise.”

A realistic understanding of the highs and lows of being a doctor is required. Prof Bender has a way of investigating this. “One of the questions I often ask applicants is: 'Has anyone tried to persuade you that medicine is an awful career?’”

Given the competition, it is inevitable that even some of the best candidates will be turned down. But this should not be a deterrent. Hamilton made good use of his enforced gap year by working as a health care assistant at a local hospital, which he believes boosted his application the second time around. “There are a lot of people who are second-time applicants, possibly as many as a third or half of the year group. I have not met anyone who has regretted having a year out – but I have met people who wish that they had had the opportunity.” The greatest benefit of his year out, Hamilton says, was the chance “to experience health care from the nurses’ perspective. From their point of view, doctors who started out as health care assistants make the best doctors!”

After speaking to Joe Hamilton – and so many other helpful people – I am encouraged, though the anxiety hasn’t completely left me. At least I know what I need to do to give myself a chance of being accepted — and maybe one day I will be able to look a patient in the eye and say: “Hello, I’m Dr Ford. How can I help you?”

The Economist recently published a story about a competition that tested Stanley Milgram's famous "six degrees of separation" claim.

DARPA, the research arm of the Department of Defense in the US, staged the Red Balloon Challenge in 2009. Competitors had to locate ten red weather balloons that had been tethered at random locations across the US.

The intention was not that one person drive around the country with a pair of binoculars. Rather, I might ask all my friends on Facebook to look out for a red balloon and tell me if they saw one. They might then ask all their friends, and so on.

The winning team from MIT found all of the balloons in just nine hours using this type of strategy. To encourage participation, they offered $2,000 to the first person to send them the co-ordinates of a balloon. On its own this may not have been very efficient. So, crucially, they also offered $1,000 to whomever recruited *that* person to the challenge, and $500 to whomever recruited *that* person to the challenge, and so on.

One mathematically interesting question is: how much money did the MIT team stand to lose? The Red Balloon Challenge offered a prize of $40,000.

In principle, the sender of a winning set of co-ordinates might have been at the top of a long line of recruiters. Doesn't this mean that the MIT team risked an enormous payout?

Well, no. Consider a recruitment chain of seven people. The total payout would be:

$2,000+$1,000+$500+$250+$125+$62.50+$31.25 = $3,968.75

The seventh payment of $31.25 is pretty small. If there were more people in the chain, their payments would be even smaller. Nonetheless, lots of small amounts can quickly add up to a large amount.

Suppose there were 17 people in the chain. The total payout would be $3,999.97 (to the nearest cent). The seventeenth person would have got 3¢.
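The payouts can be checked directly: each payment is half the previous one, so the total for a chain of n people is a partial sum of a geometric series:

```python
# Payout down a recruitment chain: $2,000, then half, then half again, ...
def chain_payout(people):
    """Total paid out for a recruitment chain of the given length."""
    return sum(2000 / 2 ** i for i in range(people))

print(chain_payout(7))    # 3968.75
print(chain_payout(17))   # 3999.97 to the nearest cent
print(chain_payout(100))  # indistinguishable from the limit of 4000
```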

Even so, there could have been many more people in the chain, and might not the total have slowly grown to an unaffordable amount?

Suppose the total amount payable is T and that there are infinitely many people in the chain. Then,

T = 2000 + 1000 + 500 + 250 + ...

Now multiply both sides of this by 2:

2T = 4000 + 2000 + 1000 + 500 + ...

Finally subtract the first equation from the second:

2T – T = 4000 + 2000 + 1000 + 500 + ... – (2000 + 1000 + 500 + 250 + ...)

Which gives:

T=4000

So even if there had been infinitely many people in the chain, the total payout would have been $4,000. Since there were ten balloons that gives a grand total of $40,000 which was the value of the prize. MIT were certain to make at least a little profit, provided they actually won the prize. Since the kudos of winning was worth more than the prize, their investment was well worth the risk.

Series such as this one are called **geometric series**. One of the earliest examples of them was Zeno’s dichotomy paradox. For me to walk 4000 metres, I first have to walk 2000m, i.e. half the total distance. I then have to walk 1000m, i.e. half the remaining distance. Then 500m. Then 250m. And so on. No matter how far I've travelled there's always half the remaining distance left to go. So I have infinitely many stages to complete and will therefore never get to the end of them.

The flaw in the argument is that last sentence. It assumes that the infinitely many stages of the journey will take infinitely long to get through. But they won't.

Suppose I walk at 1m/s. Then the first stage will take me 2000s. The second will take 1000s. The third 500s, and so on. So the total time taken will be T = 2000 + 1000 + 500 + ... We now know that this adds up to 4000. So I can finish the journey in a finite amount of time. (Indeed the 4000 seconds you would expect me to need to walk 4000m at 1m/s.)
