Category → math

[CIMC 2015 Part 2] Journey of the Blue-White Slippers

(Nontopical life update: Current 18.06 homework status: 34% (mildly screwed, probably won’t finish before I leave my cozy home for the U.S. and I usually struggle to get into the mood for homework while traveling, but I guess I’ll have to)) [18.06 status panel: 34%] (I’ve been spending most of my uptime doing said homework and running errands, and my downtime catching up on Last Week Tonight with John Oliver while farming the Flight Rising Coliseum. And, okay, making the above status panel. Live version here courtesy of Dropbox’s Public folder. No regrets.)

Day 3 (Excursions)

Morning routine snipped. We come to the middle school again to eat breakfast and gather; the contestants will be taking their tests here (accompanied by one bottle of “Buff” energy drink each) while the rest of us will be going on an excursion. Before this happens, though, two Taiwanese contestants ask me and Hsin-Po some math problems. There’s a geometry problem, which I fail to solve:

(paraphrased) In triangle △ABC, ∠A is 40° and ∠B is 60°. The angle bisector of ∠A meets BC at D; E is on AB such that ∠ADE is 30°. Find ∠DEC.

Hsin-Po figures out that, once you guess (ROT13) gur bgure boivbhf privna vf nyfb na natyr ovfrpgbe naq gurl vagrefrpg ng gur vapragre, lbh pna cebir vg ol pbafgehpgvat gur vapragre naq fubjvat sebz gur tvira natyr gung gurl vaqrrq pbvapvqr.1 Then, there’s a combinatorics problem in a book with a solution that they’re not sure about:

(paraphrased) 15 rays starting at the same point are drawn. What is the maximum number of pairs of rays that form obtuse angles?

This happens really close to when the test starts and although I have this feeling it’s isomorphic to a notable combinatorial problem, I don’t manage to articulate the isomorphism until it’s too late and they have to go. Indeed, this is more or less equivalent to (ROT13) Ghena: gur tencu unf ab sbhe-pyvdhr naq n pbzcyrgr guerr-cnegvgr tencu vf pbafgehpgvoyr. After thinking through the solution in their book, though, I realize I’ve never seen this proof of said theorem before! (But later I realize it’s actually just the very first proof that Proofs from the BOOK offers. I probably skipped it because it involved induction as well as some algebraic manipulations that looked much less intuitive and natural than they really were, so it didn’t look as cool as the later proofs. Oooooops.)

I suspect I wouldn’t do too well if I had to participate in that contest right then. But anyway, excursion.

After a long bus ride, we arrive at our first destination, Jingyuetan (淨月潭 lit. Clear Moon Lake2), allegedly the sister lake to Taiwan’s own famous[citation needed] Sun Moon Lake. We tour the place on a wall-less car and look at the lake and lots of trees. During a stop, I take some pictures of sunflowers and bees, as well as a stand selling Taiwanese sausages.

[Lake and ferry]

[Bee and sunflower]

[Sausages advertised as from Taiwan!]

The car blares weird music during the tour, such as a version of Für Elise with all the accents on different beats and a disjointed remix of the viral Chinese song 小蘋果 (Little Apple)3 with two other Chinese songs, connected with mumbling English rap segues. We also eat boxed lunches here while sitting on tiny, cramped foam mattresses on the dirt floor.

Our next stop is a museum, where there are lots of ancient historical artifacts I’m not very interested in. I find a collection of certificates involving or quoting Chairman Mao more intriguing: [Certificates with quotes and/or pictures of Chairman Mao]



I always tell myself, okay, I will actually just draw something facetiously and get it over with, nobody comes to this blog to admire my GIMP mouse doodles, but then perfectionist tendencies kick in and I get carried away and it ends up taking more than an hour or so.



original sillier post

Note on notation: I’m going to use \(\text{Stab}(x)\) instead of \(G_x\) for the stabilizer subgroup and \(\text{Cl}(x)\) instead of \(^Gx\) for the conjugacy class of \(x\). For the orbit of \(x\) I’ll stick with the norm and use \(Gx\), although it only appears in confusing summation notation that I’ll explain with words too.

We keep using this silly counting argument which I thought was something like Burnside’s lemma but actually is a lot simpler, just partitioning the set into orbits and slapping the orbit-stabilizer theorem on.

If \(G\) is the group and \(S\) is the set then
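(the display that presumably followed, reconstructed from the description above: summing over one representative \(x\) per orbit,)

\[|S| = \sum_{Gx \subseteq S} |Gx| = \sum_{Gx \subseteq S} \frac{|G|}{|\text{Stab}(x)|}.\]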


original sillier post

Note on notation: to be maximally clear, I have bolded all my vectors and put tiny arrows on them. Normal letters are usually reals, uppercase letters are usually bigger matrices. Also, \(\cdot^T\) denotes the transpose of a matrix.

Let \(\vec{\textbf{v}}_1, \ldots, \vec{\textbf{v}}_m\) be elements of \(\mathbb{R}^n\) where \(m \leq n\), i.e. column vectors with \(n\) real elements. Let \(V = [\vec{\textbf{v}}_1, \ldots, \vec{\textbf{v}}_m]\). This means pasting the column vectors together to make an \(n \times m\) matrix (\(n\) rows \(m\) columns).

Consider the thing \(\vec{\textbf{v}}_1 \wedge \vec{\textbf{v}}_2 \wedge \cdots \wedge \vec{\textbf{v}}_m\), which can be visualized as the hyperparallelogram \(\left\{\sum_{i=1}^{m} t_i\vec{\textbf{v}}_i \,\middle\vert\, t_i \in [0,1], i = 1, 2, \ldots, m \right\}\) but is apparently a different thing in a different vector space of things. We wonder how to compute the hyperarea of this hyperparallelogram.
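If I’m remembering the standard answer right (stated here up front, not derived), the hyperarea is the square root of the Gram determinant,

\[\text{hyperarea} = \sqrt{\det\left(V^T V\right)},\]

which in the case \(m = n\) reduces to the familiar \(\lvert \det V \rvert\).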

Extreme Value Theorem

original sillier post

The “Extreme Value Theorem”, according to my old calculus textbook (Larson, Hostetler, Edwards, 8th ed.), is “A continuous function defined on a closed interval attains a minimum and a maximum on the interval.” The calculus textbook continues,

Although [it] is intuitively plausible, a proof of this theorem is not within the scope of this text.

Of course Rudin proves this, but for somebody coming from an unrigorous calculus background, the required deductions span three chapters and are very hard to motivate, probably because Rudin proves things for extremely general objects.

In particular, I have no idea why the definition of “compact” is what it is: “a set for which every open cover has a finite subcover”. Therefore, here’s a less general but more motivated proof, from the ground up.


Real numbers are hard to define. We only need to know that we can add, subtract, multiply, divide, and compare them, i.e. they’re an ordered field, and that they have the least-upper-bound property. That is, if we have a set of real numbers that is bounded above, then the set has a least upper bound, a bound that “perfectly fits” the set. Precisely, the bound must be such that every element of the set is less than or equal to it, and no smaller value satisfies the same property.
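A standard example of what this property buys you: in \(\mathbb{Q}\) alone it fails, since the set

\[\left\{x \in \mathbb{Q} : x^2 < 2\right\}\]

is bounded above but has no least upper bound among the rationals; in \(\mathbb{R}\) its least upper bound is \(\sqrt{2}\).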


Continuing the porting of stuff from betaveros.stash, and adding more stuff.

Mnemonic

Here’s my mnemonic table for digits, inspired by an old Martin Gardner column. I wrote from memory the first 132 digits of 2012! correctly at IMO 2012 with this table. I had remembered more, but unfortunately, if I recall correctly, I confused myself over whether I had encoded a 5 or a 2 by the S of “nose”, because this is supposed to be more of a phonetic code than a spelling one — otherwise double letters would be confusing and lots of randomly appearing digraphs would be wasted, because English is weird.

Haskell and Primes

“I have been told that any encryption becomes safer if the underlying algorithm is maximally obscured, what is most conveniently done by coding it in Haskell.” – rankk

Functional programming is terribly addicting! Partly I think the completely different way of thinking makes it feel like learning programming, and falling in love with it, all over again. Partly there’s this evil sense of satisfaction from using $s (and later <$>s and =<<s and &&&s) to improve readability for initiated Haskellers and worsen it for everybody else. Partly it’s because Learn You a Haskell for Great Good! is such a fun read — there are too many funny bits to list but my favorite so far is when the author analyzes the first verse of Avril Lavigne’s Girlfriend.

Although I think my code in Haskell tends to be more readable than in other languages, code obfuscation in Haskell is almost natural: all you have to do is refactor the wrong function to be “pointfree”, which means that even though it’s a function that takes arguments, you define it without parameters by manipulating and joining a bunch of other functions. Example (plus a few other tiny obfuscations):

isPrime = liftA2 (&&) (liftA2 ($) (all . ((.) (0 /=)) . rem) (flip
takeWhile [2..] . (flip (.) $ liftA2 (*) id id) . (>=))) ((<) 1)

QQ wordpress why no Haskell highlighting (Editor’s note from 2017: The migration should highlight this now!)
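For anyone trying to decode the obfuscated version above, unwinding the point-free plumbing gives something like this pointful equivalent (my reconstruction, not from the original post):

```haskell
-- Pointful version of the obfuscated isPrime:
-- check trial divisors d = 2, 3, ... while d * d <= n, and require n > 1.
isPrime :: Int -> Bool
isPrime n = all (\d -> n `rem` d /= 0) (takeWhile (\d -> d * d <= n) [2 ..]) && n > 1
```

The two liftA2s are just distributing the argument n to both sides of (&&) and ($); the liftA2 (*) id id is the squaring function in disguise.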

Also, for some reason, you can do this in Haskell:

ghci> let 2 + 2 = 5 in 2 + 2

(via Haskell for the Evil Genius; this works because the let defines a new local + operator whose pattern matches the literal arguments 2 and 2, shadowing Prelude’s, so the second 2 + 2 evaluates to 5.)

Okay, but seriously now. I wrote this about my journey to learn functional programming in the programming babble post half a year ago:

The main obstacle I have is that it’s hard to optimize or get asymptotics when computation is expensive (a big problem if you’re trying to learn through Project Euler problems, particularly ones with lots of primes).
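A concrete instance of that obstacle (my example, not from the original post): the elegant one-liner “sieve” many Haskell beginners write first looks like the Sieve of Eratosthenes but is really trial division in disguise, with dreadful asymptotics for Project Euler-sized inputs:

```haskell
-- The classic beginner "sieve": each prime p spawns a filter over the
-- whole remaining stream, so this is really repeated trial division,
-- not a genuine Sieve of Eratosthenes, and it slows down very quickly.
primes :: [Int]
primes = sieve [2 ..]
  where
    sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]
```

Melissa O’Neill’s paper “The Genuine Sieve of Eratosthenes” dissects exactly this trap and gives an asymptotically honest version.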

Matrix Intuition

Stopped by a friend’s house a few days ago to do homework, which somehow devolved into me analyzing what programming language I should try to learn next in a corner, which is completely irrelevant to the rest of this post. Oops.

Anyway, in normal-math-curriculum-land, my classmates are now learning about matrices. How to add them, how to multiply them, how to calculate the determinant and stuff. Being a nice person, and feeling somewhat guilty for my grade stability despite the number of study hours I siphoned off to puzzles and the like, I was eager to help confront the monster. Said classmate basically asked me what they were for.

Well, what a hard question. But of course given the curriculum it’s the only interesting problem I think could be asked.

When I was hurrying through the high-school curriculum I remember having to learn the same thing and not having any idea what the heck was happening. Matrices appeared in that section as a messy, burdensome way to solve equations and never again, at least not in an interesting enough way to make me remember. I don’t have my precalc textbook, but a supplementary precalc book completely confirms my impressions and “matrix” doesn’t even appear in my calculus textbook index. They virtually failed to show up in olympiad training too. I learned that Po-Shen Loh knew how to kill a bunch of combinatorics problems with them (PDF), but not in the slightest how to do that myself.

Somewhere else, during what I’m guessing was random independent exploration, I happened upon the signed-permutation rule (a.k.a. Leibniz formula) for evaluating determinants, which made a lot more sense to me and looked more beautiful and symmetric:

\[\det(A) = \sum_{\sigma \in S_n} \text{sgn}(\sigma) \prod_{i=1}^n A_{i,\sigma_i}\]
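As a toy illustration (my code, not from the post), the formula transcribes almost literally into Haskell:

```haskell
import Data.List (permutations)

-- Leibniz formula: sum over all permutations sigma of
-- sgn(sigma) * prod_i A[i][sigma(i)].  This is O(n * n!), so it is only
-- sensible for tiny matrices; it is here to mirror the formula, not to compute.
detLeibniz :: [[Double]] -> Double
detLeibniz a =
  sum
    [ sign p * product [(a !! i) !! (p !! i) | i <- [0 .. n - 1]]
    | p <- permutations [0 .. n - 1]
    ]
  where
    n = length a
    -- sgn(sigma) via the parity of the number of inversions
    sign p =
      if even (length [ () | (i, x) <- zip [0 ..] p
                           , (j, y) <- zip [0 ..] p
                           , i < (j :: Int), x > y ])
        then 1
        else -1
```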

[IMO 2012 Part 6] Mostly Not About Excursions

Yes. I know it’s been more than a month. Blogging motivation decreases, but the responsibility of that “stay tuned” doesn’t go away.

It’s okay. It’s all worth it because the stuff in the games room is absolutely ridiculous. Warning: huge post.

[IMO 2012 Problem 4 on a cake]

Our old friend, the monster of a functional equation, in edible form. In the games room. Did I mention abso-zarking-lutely ridiculous?

[IMO 2012 Part 5] Unlucky Fours

[edit: okay guys, I’m surprised at how many people come here with search queries looking for solutions. If you want IMO solutions, the corresponding AoPS forum invariably has many of them. This is probably late-ish, but just in case.]

Day 2 of the contest.

Did you know that in Chinese [or Mandarin, whatever] “four” is unlucky because it’s a homophone for “death”, and hospitals tend to skip it in floors or ward numbers?

Did you know that there was going to be an anecdote involving the seventh Artemis Fowl book but I couldn’t make it work so instead you have a weird and utterly disconnected metareference to something deleted?

I don’t know, it sounded cool at the time.

Problem 4. Find all functions \(f: \mathbb{Z} \to \mathbb{Z}\) such that, for all integers \(a, b, c\) that satisfy \(a + b + c = 0\), the following equality holds: \[f(a)^2+f(b)^2+f(c)^2=2f(a)f(b)+2f(b)f(c)+2f(c)f(a).\] (Here \(\mathbb{Z}\) denotes the set of integers.)

An innocent-looking functional equation, but once you start trying it you discover that there’s quite some depth to it. Random guessing yields that \(f(x) = x^2\) is a solution, so I proved inductively that, after dividing out the constant \(f(1)\), the remaining part of \(f\) is a perfect square. Letting \(f(x) = f(1)g(x)^2\) with \(g(x)\) a nonnegative integer and factorizing the original equation, I got an auxiliary functional equation equivalent to the original one.
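For the record, verifying that \(f(x) = x^2\) works is a pleasant identity: substituting \(c = -(a+b)\), both sides of the equation expand to the same polynomial,

\[a^4 + b^4 + (a+b)^4 = 2a^4 + 2b^4 + 4a^3b + 4ab^3 + 6a^2b^2 = 2a^2b^2 + 2b^2(a+b)^2 + 2(a+b)^2a^2.\]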