tag:blogger.com,1999:blog-24005138593057807102022-05-08T09:53:25.378-07:00Mathematical Food For ThoughtJeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.comBlogger318125tag:blogger.com,1999:blog-2400513859305780710.post-46235557230237761092007-07-07T09:56:00.001-07:002013-10-06T19:14:35.253-07:00What's Your Function? Topic: Algebra. Level: AMC/AIME.<strong>Problem</strong>: Given two positive reals $\alpha$ and $\beta$, show that there is a continuous function $f$ that satisfies $f(x) = f(x+\alpha)+f(x+\beta)$.<br/><br/><strong>Solution</strong>: There are several special cases that are interesting to look at before we make a guess as to what type of function $f$ will be. First we consider the case $\alpha = \beta = 1$. This immediately gives<br/><br/>$f(x) = 2f(x+1)$.<br/><br/>It should not be too difficult to guess that $f(x) = 2^{-x}$ is a solution to this, as well as any constant multiple of it. Now try $\alpha = -1$ and $\beta = -2$, resulting in<br/><br/>$f(x) = f(x-1)+f(x-2)$.<br/><br/>Looks a lot like Fibonacci, right? In fact, one possible function is just $f(x) = \phi^x$. That's pretty convenient.<br/><br/>Notice how both of these illuminating examples are exponential functions, which leads us to guess that our function will be exponential as well. So, following this track, we set<br/><br/>$f(x) = a^x$<br/><br/>so we simply need to solve<br/><br/>$a^x = a^{x+\alpha}+a^{x+\beta}$<br/><br/>$a^x = a^x\left(a^{\alpha}+a^{\beta}\right)$.<br/><br/>If we allowed negative exponents, the equation $a^{\alpha}+a^{\beta} = 1$ would not always have a solution (take $\alpha = 1$ and $\beta = -1$ for example, since $a+a^{-1} \ge 2$ for all positive $a$). For positive $\alpha$ and $\beta$, however, the left-hand side increases continuously from $0$ (as $a \rightarrow 0^+$) to $2$ (at $a = 1$), so the Intermediate Value Theorem guarantees a solution $a \in (0, 1)$.
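For a given pair of positive exponents, the base $a$ is easy to pin down numerically; here is a quick bisection sketch (in Python, my own addition, not part of the original solution):

```python
# Sketch: for positive alpha and beta, find a in (0, 1) with
# a^alpha + a^beta = 1 by bisection, then check that f(x) = a^x
# satisfies f(x) = f(x + alpha) + f(x + beta).

def solve_base(alpha, beta, tol=1e-12):
    """Bisect on g(a) = a^alpha + a^beta - 1, which increases from -1 to +1 on (0, 1]."""
    lo, hi = tol, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid**alpha + mid**beta < 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

alpha, beta = 2.0, 3.0
a = solve_base(alpha, beta)

def f(x):
    return a**x

# the functional equation holds at an arbitrary test point
residual = abs(f(1.7) - (f(1.7 + alpha) + f(1.7 + beta)))
```

With $\alpha = 2$ and $\beta = 3$ this gives $a \approx 0.7549$, and the residual of the functional equation at the test point is negligible.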
In any case, we have found a function whenever the equation $a^{\alpha}+a^{\beta} = 1$ has a solution in the reals.Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com6tag:blogger.com,1999:blog-2400513859305780710.post-57853613227096659172007-06-29T16:12:00.001-07:002013-10-06T19:14:35.269-07:00Addition At Its Finest. Topic: Calculus/S&S.<strong>Problem</strong>: Evaluate $\displaystyle \sum_{n=1}^{\infty} \frac{x^n}{n(n+1)}$ where $x$ is a real number with $|x| < 1$.<br/><br/><strong>Solution</strong>: Looking at that all too common denominator, we do a partial fraction decomposition in hopes of a telescoping series. The summation becomes<br/><br/>$\displaystyle \sum_{n=1}^{\infty} \left(\frac{x^n}{n}-\frac{x^n}{n+1}\right)$.<br/><br/>Common Taylor series knowledge tells us that<br/><br/>$\displaystyle \ln{(1-x)} = -\left(x+\frac{x^2}{2}+\frac{x^3}{3}+\cdots\right) = -\sum_{n=1}^{\infty} \frac{x^n}{n}$,<br/><br/>which conveniently fits the first part of the summation. As for the second part, we get<br/><br/>$\displaystyle \sum_{n=1}^{\infty} \frac{x^n}{n+1} = \frac{1}{x} \sum_{n=1}^{\infty} \frac{x^{n+1}}{n+1} = \frac{-\ln{(1-x)}-x}{x}$<br/><br/>from the same Taylor series. Combining the results, our answer is then<br/><br/>$\displaystyle \sum_{n=1}^{\infty} \frac{x^n}{n(n+1)} = 1-\ln{(1-x)}+\frac{\ln{(1-x)}}{x}$.<br/><br/>QED.<br/><br/>--------------------<br/><br/>Comment: Even though the trick at the beginning didn't actually get much to telescope, the idea certainly made it easier to recognize the Taylor series.
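The closed form is easy to sanity-check numerically; a few lines of Python (a sketch I added; the sample point and the 200-term cutoff are arbitrary):

```python
# Sketch: compare a partial sum of x^n / (n(n+1)) against the closed form
# 1 - ln(1 - x) + ln(1 - x)/x at a sample |x| < 1.

import math

def partial_sum(x, terms=200):
    return sum(x**n / (n * (n + 1)) for n in range(1, terms + 1))

def closed_form(x):
    return 1 - math.log(1 - x) + math.log(1 - x) / x

x = 0.5
err = abs(partial_sum(x) - closed_form(x))
```

At $x = 1/2$ the two agree to machine precision, since the tail of the series is geometrically small.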
Algebraic manipulations are nifty to carry around and can be applied in problems wherever you go.<br/><br/>--------------------<br/><br/>Practice Problem: Show that $\displaystyle \int_0^{\frac{\pi}{2}} \ln{(\tan{x})} \, dx = 0$.Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com4tag:blogger.com,1999:blog-2400513859305780710.post-5182831663066826252007-06-18T13:37:00.001-07:002013-10-06T19:14:35.258-07:00The Smaller The Better. Topic: Calculus.<strong>Problem</strong>: Given a <em>complicated</em> function $f: \mathbb{R}^n \rightarrow \mathbb{R}$, find an approximate local minimum.<br/><br/><strong>Solution</strong>: The adjective <em>complicated</em> is there only so that we may assume there is no easy way to solve $\bigtriangledown f = 0$ to immediately give the solution. We seek an algorithm that will lead us to a local minimum (hopefully a global minimum as well).<br/><br/>We start at an arbitrary point $X_0 = (x_1, x_2, \ldots, x_n)$. Consider the following process (for $k = 0, 1, 2, \ldots$), known as gradient descent:<br/><br/>1. Calculate (approximately) $\bigtriangledown f(X_k)$.<br/><br/>2. Set $X_{k+1} = X_k - \gamma_k \bigtriangledown f(X_k)$, where $\gamma_k$ is a constant that can be determined by a <a href="http://en.wikipedia.org/wiki/Line_search">line search</a>.<br/><br/>It is well-known that the direction of the gradient is the direction of maximum increase and the direction opposite the gradient is the direction of maximum decrease. Hence this algorithm is based on the idea that we always move in the direction that will decrease $f$ the most. Sounds pretty good, right? Well, unfortunately, gradient descent can converge very slowly, so it is only really useful for smaller optimization problems.
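The two-step iteration above can be sketched in a few lines; this toy Python example (my own illustration) uses a fixed step size $\gamma$ rather than a line search, on a simple convex quadratic:

```python
# Sketch of gradient descent with a fixed step size (no line search),
# on the convex quadratic f(x, y) = (x - 3)^2 + 2*(y + 1)^2.

def grad_f(p):
    x, y = p
    return (2 * (x - 3), 4 * (y + 1))

p = (0.0, 0.0)          # arbitrary starting point X_0
gamma = 0.1             # fixed step; a line search would choose this adaptively
for _ in range(500):
    g = grad_f(p)
    p = (p[0] - gamma * g[0], p[1] - gamma * g[1])
# p should now be close to the minimizer (3, -1)
```

Even on this easy problem the error only shrinks by a constant factor per step, which is the slow (linear) convergence mentioned above.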
Fortunately, there exist faster algorithms, though they are naturally more complex, such as the <a href="http://en.wikipedia.org/wiki/Nonlinear_conjugate_gradient">nonlinear conjugate gradient method</a> or <a href="http://en.wikipedia.org/wiki/Newton%27s_method_in_optimization">Newton's method</a>; the latter involves computing the inverse of the Hessian matrix, which is a pain.Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com0tag:blogger.com,1999:blog-2400513859305780710.post-81553689135595107402007-06-06T11:15:00.001-07:002013-10-06T19:14:35.289-07:00Colorful! Topic: Calculus.<strong>Theorem</strong>: (Green's Theorem) Let $R$ be a simply connected plane region whose boundary is a simple, closed, piecewise smooth curve $C$ oriented counterclockwise. If $f(x, y)$ and $g(x, y)$ are continuous and have continuous first partial derivatives on some open set containing $R$, then<br/><br/>$\displaystyle \oint_C f(x, y) dx + g(x, y) dy = \int_R \int \left(\frac{\partial g}{\partial x}-\frac{\partial f}{\partial y}\right) dA$.<br/><br/>--------------------<br/><br/><strong>Problem</strong>: Evaluate $\displaystyle \oint_C x^2y dx + (y+xy^2) dy$, where $C$ is the boundary of the region enclosed by $y = x^2$ and $x = y^2$.<br/><br/><strong>Solution</strong>: First, verify that this region satisfies all of the requirements for Green's Theorem - indeed, it does. So we may apply the theorem with $f(x, y) = x^2y$ and $g(x, y) = y+xy^2$. From these, we have $\frac{\partial g}{\partial x} = y^2$ and $\frac{\partial f}{\partial y} = x^2$. Then we obtain<br/><br/>$\displaystyle \oint_C x^2y dx + (y+xy^2) dy = \int_R \int (y^2-x^2) dA$.<br/>But clearly this integral over the region $R$ can be represented as $\displaystyle \int_0^1 \int_{x^2}^{\sqrt{x}} (y^2-x^2) dy dx$, so it remains a matter of calculation to get the answer.
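Before grinding through by hand, the double integral can be checked with a crude midpoint Riemann sum (a Python sketch of my own, not part of the solution; the grid size is arbitrary):

```python
# Sketch: midpoint Riemann sum of (y^2 - x^2) over the region
# between y = x^2 and y = sqrt(x) for 0 <= x <= 1.

N = 400
total = 0.0
for i in range(N):
    x = (i + 0.5) / N            # midpoint in x
    lo, hi = x * x, x ** 0.5     # the strip runs from y = x^2 up to y = sqrt(x)
    h = (hi - lo) / N
    for j in range(N):
        y = lo + (j + 0.5) * h   # midpoint in y
        total += (y * y - x * x) * h * (1.0 / N)
```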
First, we evaluate the inner integral to get<br/><br/>$\displaystyle \int_0^1 \int_{x^2}^{\sqrt{x}} (y^2-x^2) dy dx = \int_0^1 \left[\frac{y^3}{3}-x^2y\right]_{x^2}^{\sqrt{x}} dx = \int_0^1 \left(\frac{x^{3/2}}{3}-x^{5/2}-\frac{x^6}{3}+x^4\right)dx$.<br/><br/>Then finally we have<br/><br/>$\displaystyle \int_0^1 \left(\frac{x^{3/2}}{3}-x^{5/2}-\frac{x^6}{3}+x^4\right)dx = \left[\frac{2x^{5/2}}{15}-\frac{2x^{7/2}}{7}-\frac{x^7}{21}+\frac{x^5}{5}\right]_0^1 = \frac{2}{15}-\frac{2}{7}-\frac{1}{21}+\frac{1}{5} = 0$.<br/><br/>QED.<br/><br/>--------------------<br/><br/>Comment: To me, Green's Theorem is a very interesting result. It's not at all obvious that a line integral along the boundary of a region is equivalent to an integral of some partial derivatives in the region itself. A simplified proof of the result can be obtained by proving that<br/><br/>$\displaystyle \oint_C f(x, y) dx = -\int_R \int \frac{\partial f}{\partial y} dA$ and $\displaystyle \oint_C g(x, y) dy = \int_R \int \frac{\partial g}{\partial x} dA$.<br/><br/>--------------------<br/><br/>Practice Problem: Let $R$ be a plane region with area $A$ whose boundary is a piecewise smooth simple closed curve $C$. Show that the centroid $(\overline{x}, \overline{y})$ of $R$ is given by<br/><br/>$\displaystyle \overline{x} = \frac{1}{2A} \oint_C x^2 dy$ and $\displaystyle \overline{y} = -\frac{1}{2A} \oint_C y^2 dx$.Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com0tag:blogger.com,1999:blog-2400513859305780710.post-79684954754698787602007-06-04T11:36:00.001-07:002013-10-06T19:14:35.286-07:00More Integrals... *whine*. 
Topic: Calculus.<strong>Definition</strong>: (Jacobian) If $T$ is the transformation from the $uv$-plane to the $xy$-plane defined by the equations $x = x(u, v)$ and $y = y(u, v)$, then the Jacobian of $T$ is denoted by $J(u, v)$ or by $\partial(x, y)/\partial(u, v)$ and is defined by<br/><br/>$J(u, v) = \frac{\partial(x, y)}{\partial(u, v)} = \frac{\partial x}{\partial u} \cdot \frac{\partial y}{\partial v} - \frac{\partial y}{\partial u} \cdot \frac{\partial x}{\partial v}$,<br/><br/>i.e. the determinant of the matrix of the partial derivatives (also known as the Jacobian matrix). Naturally, this can be generalized to more variables.<br/><br/>--------------------<br/><br/><strong>Theorem</strong>: If the transformation $x = x(u, v)$, $y = y(u, v)$ maps the region $S$ in the $uv$-plane into the region $R$ in the $xy$-plane, and if the Jacobian $\partial(x, y)/\partial(u, v)$ is nonzero and does not change sign on $S$, then (with appropriate restrictions on the transformation and the regions) it follows that<br/><br/>$\displaystyle \int_R \int f(x, y) dA_{xy} = \int_S \int f(x(u, v), y(u, v)) \left|\frac{\partial(x, y)}{\partial(u, v)} \right| dA_{uv}$.<br/><br/>--------------------<br/><br/><strong>Problem</strong>: Evaluate $\displaystyle \int_R \int e^{(y-x)/(y+x)} dA$, where $R$ is the region in the first quadrant enclosed by the trapezoid with vertices $(0, 1); (1, 0); (0, 4); (4, 0)$.<br/><br/><strong>Solution</strong>: The bounding lines can be written as $x = 0$, $y = 0$, $y = -x+1$, and $y = -x+4$. Now consider the transformation $u = y+x$ and $v = y-x$. In the $uv$-plane, the bounding lines of the new region $S$ can now be written as $u = 1$, $u = 4$, $v = u$, and $v = -u$.<br/><br/>We can write $x$ and $y$ as functions of $u$ and $v$: simply $x = \frac{u-v}{2}$ and $y = \frac{u+v}{2}$. 
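As a quick sanity check on this map, the Jacobian can be estimated by central finite differences (a Python sketch I added; the sample point is arbitrary, and since the map is linear the differences are essentially exact):

```python
# Sketch: finite-difference check of the Jacobian of the map
# x = (u - v)/2, y = (u + v)/2.

def xy(u, v):
    return ((u - v) / 2, (u + v) / 2)

def jacobian(u, v, h=1e-6):
    xu = (xy(u + h, v)[0] - xy(u - h, v)[0]) / (2 * h)
    xv = (xy(u, v + h)[0] - xy(u, v - h)[0]) / (2 * h)
    yu = (xy(u + h, v)[1] - xy(u - h, v)[1]) / (2 * h)
    yv = (xy(u, v + h)[1] - xy(u, v - h)[1]) / (2 * h)
    return xu * yv - yu * xv     # determinant of the Jacobian matrix

J = jacobian(2.0, 0.5)           # constant 1/2 everywhere for a linear map
```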
So the Jacobian $\displaystyle \frac{\partial(x, y)}{\partial(u, v)} = \frac{\partial x}{\partial u} \cdot \frac{\partial y}{\partial v} - \frac{\partial y}{\partial u} \cdot \frac{\partial x}{\partial v} = \frac{1}{2} \cdot \frac{1}{2} - \frac{1}{2} \cdot \left(-\frac{1}{2} \right) = \frac{1}{2}$.<br/><br/>Then our original integral becomes $\displaystyle \int_R \int e^{(y-x)/(y+x)} dA = \frac{1}{2} \int_S \int e^{v/u} dA$. And this is equivalent to<br/><br/>$\displaystyle \frac{1}{2} \int_S \int e^{v/u} dA = \frac{1}{2} \int_1^4 \int_{-u}^u e^{v/u} dv du = \frac{1}{2} \int_1^4 \big[ u e^{v/u} \big]_{v=-u}^u du = \frac{1}{2} \int_1^4 u\left(e-\frac{1}{e}\right) du = \frac{15}{4}\left(e-\frac{1}{e}\right)$.<br/><br/>QED.<br/><br/>--------------------<br/><br/>Comment: Note that the above theorem is central to multivariable calculus, as it is the analogue of $u$-substitution in one variable, which we all know is the ultimate integration technique. It functions in the same way, giving you a lot more flexibility in both the function you are integrating and the region you are integrating over.<br/><br/>--------------------<br/><br/>Practice Problem: Evaluate $\displaystyle \int_R \int (x^2-y^2) dA$, where $R$ is the rectangular region enclosed by the lines $y = -x$, $y = 1-x$, $y = x$, $y = x+2$.Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com0tag:blogger.com,1999:blog-2400513859305780710.post-74867130811138125482007-05-28T04:34:00.001-07:002013-10-06T19:14:35.282-07:00One By One, We're Making It Fun. Topic: Calculus/S&S.<strong>Theorem</strong>: (Stolz-Cesaro) Let $\{a_n\}$ and $\{b_n\}$ be sequences of real numbers such that $\{b_n\}$ is positive, strictly increasing, and unbounded.
If the limit<br/><br/>$\displaystyle \lim_{n \rightarrow \infty} \frac{a_{n+1}-a_n}{b_{n+1}-b_n} = L$<br/><br/>exists, then the following limit also exists and we have<br/><br/>$\displaystyle \lim_{n \rightarrow \infty} \frac{a_n}{b_n} = L$.<br/><br/>--------------------<br/><br/><strong>Theorem</strong>: (Summation by Parts) If $\{f_k\}$ and $\{g_k\}$ are two sequences, then<br/><br/>$\displaystyle \sum_{k=m}^n f_k(g_{k+1}-g_k) = [f_{n+1}g_{n+1}-f_mg_m]-\sum_{k=m}^n g_{k+1}(f_{k+1}-f_k)$.<br/><br/>--------------------<br/><br/><strong>Problem</strong>: Let $\{a_n\}$ be a sequence of real numbers such that $\displaystyle \sum_{k=0}^{\infty} a_k$ converges. Show that $\displaystyle \lim_{n \rightarrow \infty} \frac{1}{n+1} \sum_{k=0}^n k \cdot a_k = 0$.<br/><br/><strong>Solution</strong>: Define the sequence $\{b_n\}$ by $\displaystyle b_n = \sum_{k=0}^{n-1} a_k$ (so that $b_0 = 0$ and $b_{k+1}-b_k = a_k$) and let $\displaystyle \lim_{n \rightarrow \infty} b_n = L$. Then, by summation by parts with $\{f_n\} = \{n\}$ and $\{g_n\} = \{b_n\}$, we have<br/><br/>$\displaystyle \sum_{k=0}^n k \cdot a_k = \sum_{k=0}^n k \cdot (b_{k+1}-b_k) = (n+1)b_{n+1}-\sum_{k=0}^n b_{k+1}$.<br/><br/>The summation we wish to take the limit of is then<br/><br/>$\displaystyle \frac{1}{n+1} \sum_{k=0}^n k \cdot a_k = b_{n+1}-\frac{1}{n+1} \sum_{k=0}^n b_{k+1}$.<br/><br/>But since, by Stolz-Cesaro,<br/><br/>$\displaystyle \lim_{n \rightarrow \infty} \frac{1}{n+1} \sum_{k=0}^n b_{k+1} = \lim_{n \rightarrow \infty} b_{n+1} = L$,<br/><br/>we obtain<br/><br/>$\displaystyle \lim_{n \rightarrow \infty} \frac{1}{n+1} \sum_{k=0}^n k \cdot a_k = \lim_{n \rightarrow \infty} \left(b_{n+1}-\frac{1}{n+1} \sum_{k=0}^n b_{k+1}\right) = L-L = 0$.<br/><br/>QED.<br/><br/>--------------------<br/><br/>Comment: Summation by parts is a very useful technique to change sums around so that they are easier to evaluate. If you hadn't noticed, it is the discrete analogue of integration by parts and is in fact very similar.
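The summation by parts identity itself is easy to spot-check numerically; a Python sketch (my own addition; the random sequences and index bounds are arbitrary):

```python
# Sketch: numerically verify the summation-by-parts identity
# sum_{k=m}^{n} f_k (g_{k+1} - g_k)
#   = f_{n+1} g_{n+1} - f_m g_m - sum_{k=m}^{n} g_{k+1} (f_{k+1} - f_k)
# for a pair of arbitrary sequences.

import random

random.seed(0)
m, n = 3, 12
f = [random.random() for _ in range(n + 2)]
g = [random.random() for _ in range(n + 2)]

lhs = sum(f[k] * (g[k + 1] - g[k]) for k in range(m, n + 1))
rhs = f[n + 1] * g[n + 1] - f[m] * g[m] - sum(
    g[k + 1] * (f[k + 1] - f[k]) for k in range(m, n + 1))
diff = abs(lhs - rhs)
```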
Stolz-Cesaro is powerful as well and is indeed often described as the discrete analogue of L'Hopital's rule. Applying well-known calculus ideas to discrete things can often turn into neat results.<br/><br/>--------------------<br/><br/>Practice Problem: If $\{a_n\}$ is a decreasing sequence such that $\displaystyle \lim_{n \rightarrow \infty} a_n = 0$, show that<br/><br/>$\displaystyle \sum_{k=1}^{\infty} a_k \cdot \sin{(kx)}$<br/><br/>converges for all $x$.Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com0tag:blogger.com,1999:blog-2400513859305780710.post-80055159177842141832007-05-12T11:55:00.001-07:002013-10-06T19:14:35.287-07:00Bigger Means Better. Topic: Algebra/Inequalities/Sets. Level: Olympiad.<strong>Definition</strong>: A set $S$ is said to be <em>convex</em> if, for any $X, Y \in S$ and $\lambda \in [0,1]$, we have $\lambda X+(1-\lambda)Y \in S$.<br/><br/>--------------------<br/><br/><strong>Definition</strong>: Let $f: D \rightarrow \mathbb{R}$ be a real-valued function defined on a convex set $D$. We say that $f$ is <em>convex</em> if, for all $X, Y \in D$ and $\lambda \in [0, 1]$, we have<br/><br/>$\lambda f(X) + (1-\lambda) f(Y) \ge f(\lambda X+(1-\lambda)Y)$.<br/><br/>--------------------<br/><br/><strong>Theorem</strong>: (Jensen's Inequality) Let $f$ be a real-valued, convex function defined on a convex set $D$. Given $x_1, x_2, \ldots, x_n \in D$ and nonnegative reals $w_1, w_2, \ldots, w_n$ such that $w_1+w_2+\cdots+w_n = 1$, we have<br/><br/>$w_1f(x_1)+w_2f(x_2)+\cdots+w_nf(x_n) \ge f(w_1x_1+w_2x_2+\cdots+w_nx_n)$<br/><br/>or, more concisely,<br/><br/>$\displaystyle \sum_{i=1}^n w_if(x_i) \ge f\left(\sum_{i=1}^n w_ix_i\right)$.<br/><br/><strong>Proof</strong>: We proceed by induction on $n$.<br/><br/>Base Case - $n = 1$, we have $f(x_1) \ge f(x_1)$ which is trivially true.
$n = 2$, we have $\lambda_1 f(x_1)+\lambda_2 f(x_2) \ge f(\lambda_1 x_1+\lambda_2 x_2)$ with $\lambda_1+\lambda_2 = 1$ which is true by the definition of convexity.<br/><br/>Induction Step - Suppose $\displaystyle \sum_{i=1}^k w_if(x_i) \ge f\left(\sum_{i=1}^k w_ix_i\right)$. We wish to show<br/><br/>$\displaystyle \sum_{i=1}^{k+1} u_if(x_i) \ge f\left(\sum_{i=1}^{k+1} u_ix_i\right)$<br/><br/>for nonnegative reals $u_1, u_2, \ldots, u_{k+1}$ that sum to $1$. Well,<br/><br/>$\displaystyle f\left(\sum_{i=1}^{k+1} u_ix_i\right) \le u_{k+1}f(x_{k+1})+(1-u_{k+1})f\left(\frac{1}{1-u_{k+1}} \sum_{i=1}^k u_ix_i\right)$<br/><br/>by the definition of convexity. But since $\displaystyle \frac{1}{1-u_{k+1}} \sum_{i=1}^k u_i = \sum_{i=1}^k \frac{u_i}{1-u_{k+1}} = 1$, by our inductive hypothesis (the $k$ term case) we must have<br/><br/>$\displaystyle f\left(\frac{1}{1-u_{k+1}} \sum_{i=1}^k u_ix_i\right) \le \sum_{i=1}^k \frac{u_i}{1-u_{k+1}} f(x_i)$.<br/><br/>Combining this into the above inequality, we obtain<br/><br/>$\displaystyle f\left(\sum_{i=1}^{k+1} u_ix_i\right) \le u_{k+1}f(x_{k+1})+(1-u_{k+1})\sum_{i=1}^k \frac{u_i}{1-u_{k+1}} f(x_i) = \sum_{i=1}^{k+1} u_if(x_i)$<br/><br/>as desired, completing the induction. QED.<br/><br/>--------------------<br/><br/><strong>Definition</strong>: The <em>convex hull</em> of a set $S$ is defined to be the smallest convex set containing $S$. It is the set of all points that can be written as $\displaystyle \sum_{i=1}^n a_ix_i$, where $n$ is a natural number, $a_1, a_2, \ldots, a_n$ are nonnegative reals that sum to $1$, and $x_1, x_2, \ldots, x_n \in S$.<br/><br/>--------------------<br/><br/><strong>Definition</strong>: A <em>vertex</em> of a set $D$ is a point $v \in D$ such that for all $x \in D$ not equal to $v$ and $\lambda > 1$ we have $\lambda v+(1-\lambda)x \not\in D$.<br/><br/>--------------------<br/><br/><strong>Theorem</strong>: Let $V_D = \{v_1, v_2, \ldots, v_k\}$ be the set of vertices of a compact convex set $D$. 
The convex hull of $V_D$ is $D$.<br/><br/><strong>Example</strong>: The set of vertices of a convex polygon is, in fact, its vertices as traditionally defined in geometry. Any point inside the polygon can be written as a convex combination of its vertices.<br/><br/>--------------------<br/><br/><strong>Theorem</strong>: If $f$ is a real-valued, convex function defined on a compact convex set $D$, then $\displaystyle \max_{x \in D} f(x) = \max_{x \in V_D} f(x)$, where $V_D$ is the set of vertices of $D$.<br/><br/><strong>Proof</strong>: We will show that $\displaystyle f(x) \le \max_{x \in V_D} f(x)$ for all $x \in D$.<br/><br/>Let $\displaystyle x = \sum_{i=1}^k \lambda_i v_i$ where $\lambda_1, \lambda_2, \ldots, \lambda_k$ are nonnegative reals that sum to $1$ and $V_D = \{v_1, v_2, \ldots, v_k\}$. This is possible by the preceding theorem. Then by Jensen's Inequality we get<br/><br/>$\displaystyle \sum_{i=1}^k \lambda_i f(v_i) \ge f\left(\sum_{i=1}^k \lambda_i v_i\right) = f(x)$.<br/><br/>And since $\displaystyle \max_{x \in V_D} f(x) \ge \sum_{i=1}^k \lambda_i f(v_i)$, we have $\displaystyle \max_{x \in V_D} f(x) \ge f(x)$ for all $x \in D$. Furthermore, since $V_D$ is a subset of $D$, we know that this maximum is attained, thus<br/><br/>$\displaystyle \max_{x \in D} f(x) = \max_{x \in V_D} f(x)$<br/><br/>as desired. QED.<br/><br/>--------------------<br/><br/>Comment: This is a very useful result, as it allows us to limit our search for the maximum of a function to its set of vertices, which is usually a considerably smaller set, though it may still be infinite (think sphere). In any case, this works well for constrained optimization problems in which the domain is limited to a polygon, the easiest application of this theorem.<br/><br/>--------------------<br/><br/>Practice Problem: Let $x$ and $y$ be real numbers satisfying $|2x-y| \le 3$ and $|y-3x| \le 1$. 
Find the maximum value of $f(x, y) = (x-y)^2$.Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com3tag:blogger.com,1999:blog-2400513859305780710.post-8008141235869762552007-05-08T14:06:00.001-07:002013-10-06T19:14:35.288-07:00A Square of Divisors. Topic: Number Theory. Level: AIME/Olympiad.<strong>Problem</strong>: (1999 Canada - #3) Determine all positive integers $n > 1$ with the property that $n = (d(n))^2$. Here $d(n)$ denotes the number of positive divisors of $n$.<br/><br/><strong>Solution</strong>: So, testing some small numbers yields $n = 9$ as a solution. We claim that this is the only such solution.<br/><br/>Clearly, since $n$ is a square, we can use a variant of the usual prime decomposition and say that $n = p_1^{2e_1} p_2^{2e_2} \cdots p_k^{2e_k}$.<br/><br/>Furthermore, again since $n$ is a square, we know<br/><br/>$d(n) = (2e_1+1)(2e_2+1) \cdots (2e_k+1)$<br/><br/>is odd, so $n = (d(n))^2$ must be odd as well, i.e. $2$ is not one of the $p_i$. Then we use the equation given to us to get<br/><br/>$p_1^{2e_1} p_2^{2e_2} \cdots p_k^{2e_k} = (2e_1+1)^2(2e_2+1)^2 \cdots (2e_k+1)^2$<br/><br/>$p_1^{e_1} p_2^{e_2} \cdots p_k^{e_k} = (2e_1+1)(2e_2+1) \cdots (2e_k+1)$.<br/><br/>Note, however, that by Bernoulli's Inequality (overkill, I know) we have for $i = 1, 2, \ldots, k$<br/><br/>$(1+(p_i-1))^{e_i} \ge 1+(p_i-1)e_i \ge 2e_i+1$<br/><br/>with equality iff $p_i = 3$ and $e_i = 1$. So<br/><br/>$p_1^{e_1} p_2^{e_2} \cdots p_k^{e_k} \ge (2e_1+1)(2e_2+1) \cdots (2e_k+1)$.<br/><br/>Since we want equality, we must have $p_i = 3$ and $e_i = 1$ for all $i$. But since the primes are supposed to be distinct, we can have exactly one prime, so that $n = 3^2 = 9$ is the only solution. QED.<br/><br/>--------------------<br/><br/>Comment: Another one of those problems where you kind of look at the result and you're like huh, that's interesting. But anyway, just throwing in some weak inequalities led to a pretty straightforward solution.
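A brute-force search backs this up; in the following Python sketch (the bound of roughly $10^5$ is my choice) it suffices to test perfect squares, since $n = (d(n))^2$ forces $n$ to be a square:

```python
# Sketch: brute-force check that n = 9 is the only n > 1 up to ~10^5
# with n = d(n)^2, where d(n) counts positive divisors.

def d(n):
    """Count divisors by trial division up to sqrt(n)."""
    count = 0
    i = 1
    while i * i <= n:
        if n % i == 0:
            count += 2 if i * i != n else 1
        i += 1
    return count

# only perfect squares m^2 can satisfy m^2 = d(m^2)^2
solutions = [m * m for m in range(2, 318) if m * m == d(m * m) ** 2]
```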
As long as you know how to find the number of divisors of a positive integer, it isn't too much of a stretch to figure the rest out, though it may take some time to get going in the right direction since the problem is quite open-ended.<br/><br/>--------------------<br/><br/>Practice Problem: (1999 Canada - #4) Suppose $a_1,a_2,\cdots,a_8$ are eight distinct integers from $\{1,2,\ldots,16,17\}$. Show that there is an integer $k > 0$ such that the equation $a_i - a_j = k$ has at least three different solutions. Also, find a specific set of $7$ distinct integers from $\{1,2,\ldots,16,17\}$ such that the equation $a_i - a_j = k$ does not have three distinct solutions for any $k > 0$.Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com6tag:blogger.com,1999:blog-2400513859305780710.post-14288854460335542042007-05-05T06:31:00.001-07:002013-10-06T19:14:35.231-07:00Ready For AP Calculus? Topic: Calculus/S&S. Level: AIME/Olympiad.<strong>Problem</strong>: Evaluate $\displaystyle \sum_{n=1}^{\infty} \frac{1}{n^2 \cdot 2^n}$.<br/><br/><strong>Solution</strong>: Let $\displaystyle f(x) = \sum_{n=1}^{\infty} \frac{x^n}{n^2}$. Then $\displaystyle f^{\prime}(x) = \sum_{n=1}^{\infty} \frac{x^{n-1}}{n} = -\frac{\ln{(1-x)}}{x}$, a well-known Taylor series. So we want to integrate this:<br/><br/>$\displaystyle \int \frac{\ln{(1-x)}}{x}dx = \ln{x} \ln{(1-x)}-\int \frac{\ln{x}}{x-1}dx$<br/><br/>by parts using $u = \ln{(1-x)}$ and $dv = dx/x$. Substituting $t = 1-x$ in the last integral, we have<br/><br/>$\displaystyle \int \frac{\ln{(1-x)}}{x}dx = \ln{x} \ln{(1-x)}-\int \frac{\ln{(1-t)}}{t}dt$.<br/><br/>So $-f(x) = \ln{x} \ln{(1-x)}+f(t)+C = \ln{x} \ln{(1-x)}+f(1-x)+C$. Thus<br/><br/>$f(x)+f(1-x) = C-\ln{x} \ln{(1-x)}$<br/><br/>for some constant $C$.
Using our knowledge that $f(0) = 0$, $\displaystyle f(1) = \frac{\pi^2}{6}$, and $\displaystyle \lim_{x \rightarrow 1} \ln{x} \ln{(1-x)}= 0$ by L'Hopital twice, we see that<br/><br/>$f(0)+f(1) = C \Rightarrow C = \frac{\pi^2}{6}$.<br/><br/>Then setting $x = \frac{1}{2}$ we obtain $\displaystyle 2f\left(\frac{1}{2}\right) = \frac{\pi^2}{6}-\ln{\frac{1}{2}} \ln{\frac{1}{2}}$ and $\displaystyle \sum_{n=1}^{\infty} \frac{1}{n^2 \cdot 2^n} = f\left(\frac{1}{2}\right) = \frac{\pi^2}{12}-\frac{\ln^2{(2)}}{2}$. QED.<br/><br/>--------------------<br/><br/>Comment: This was a pretty tough problem that required you to compound a lot of calculus knowledge all into a single problem - series, integration by parts, limits. Recognizing all the steps was the first part; following through with the right computations was another. Still, there weren't really any super clever tricks, mostly just standard substitutions and approaches applied in a somewhat non-standard way. Makes for a very nice problem.<br/><br/>--------------------<br/><br/>Practice Problem #1: Show that $\displaystyle \lim_{x \rightarrow 1} \ln{x} \ln{(1-x)} = 0$.<br/><br/>Practice Problem #2: Evaluate $\displaystyle \sum_{n=1}^{\infty} \frac{x^n}{n}$. Can you also find $\displaystyle \sum_{n=1}^{\infty} \frac{x^n}{n^3}$?Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com0tag:blogger.com,1999:blog-2400513859305780710.post-33172524884436829142007-04-28T07:46:00.001-07:002013-10-06T19:14:35.244-07:00Just A "Natural" Thing. Topic: Algebra. Level: AIME/Olympiad.<strong>Problem</strong>: Factor $7^{6(2k-1)}-7^{5(2k-1)}+7^{4(2k-1)}-7^{3(2k-1)}+7^{2(2k-1)}-7^{2k-1}+1$.<br/><br/><strong>Solution</strong>: Consider the polynomial $x^7+1$. We have<br/><br/>$x^7+1 = (x+1)(x^6-x^5+x^4-x^3+x^2-x+1)$,<br/><br/>so we want to factor the second factor when $x = 7^{2k-1}$. Call it $P(x)$ so that $P(x) = \frac{x^7+1}{x+1}$.
Consider the relation<br/><br/>$(x+1)^7-(x^7+1) = 7x^6+21x^5+35x^4+35x^3+21x^2+7x$.<br/><br/>Since $-1$ is a root of the LHS, we factor it out of the RHS as well to get<br/><br/>$(x+1)^7-(x^7+1) = 7x(x+1)(x^4+2x^3+3x^2+2x+1) = 7x(x+1)(x^2+x+1)^2$.<br/><br/>Dividing through by $x+1$ and rearranging, we obtain the nice expression<br/><br/>$P(x) = (x+1)^6-7x(x^2+x+1)^2$.<br/><br/>Letting $x = 7^{2k-1}$, this becomes<br/><br/>$P(7^{2k-1}) = (7^{3(2k-1)}+3 \cdot 7^{2(2k-1)}+3 \cdot 7^{2k-1}+1)^2-7^{2k}(7^{2(2k-1)}+7^{2k-1}+1)^2$<br/><br/>which is conveniently enough a difference of two squares. And we'll leave it as this because the factors are not particularly nice or anything. QED.<br/><br/>--------------------<br/><br/>Comment: This "natural" factorization was basically the one crucial step to solving the USAMO #5 this year and most people did not see it, unsurprisingly. Taking the difference $(x+1)^7-(x^7+1)$ was the trickiest/cleverest part, and there were definitely a limited number of approaches to this factorization. Oh well, at least it seems sort of cool after you know about it.<br/><br/>--------------------<br/><br/>Practice Problem: (2007 USAMO - #5) Prove that for every nonnegative integer n, the number $7^{7^{n}}+1$ is the product of at least $2n+3$ (not necessarily distinct) primes.Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com1tag:blogger.com,1999:blog-2400513859305780710.post-51191740155981072192007-04-23T14:10:00.001-07:002013-10-06T19:14:35.257-07:002007 USAMO.Good luck to everyone on the USAMO tomorrow and Wednesday!Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com0tag:blogger.com,1999:blog-2400513859305780710.post-32003884235461348532007-04-18T12:10:00.001-07:002013-10-06T19:14:35.242-07:00Something To Think About. Topic: Probability. Level: AMC/AIME.<strong>Problem</strong>: A coin is repeatedly flipped. What is the expected number of flips to get two heads in a row? 
$n$ heads in a row?<br/><br/>--------------------<br/><br/>I will be out of town for the next four days, so have fun with the problem!Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com5tag:blogger.com,1999:blog-2400513859305780710.post-14185534937803176322007-04-15T10:43:00.001-07:002013-10-06T19:14:35.230-07:00Sense Of Poise And Rationality. Topic: Algebra. Level: AIME/Olympiad.<strong>Problem</strong>: (1993 Canada National Olympiad - #2) Show that $x$ is rational if and only if three distinct terms that form a geometric progression can be chosen from the sequence $x, x+1, x+2, x+3, \ldots$.<br/><br/><strong>Solution</strong>: We will prove the "if" statement first, i.e. $x$ is rational implies we can find the geometric progression. Let $x = \frac{p}{q}$ with $q$ a positive integer; by starting from the term $x+n$ for a large integer $n$ if necessary (the sequence from that point on is a subsequence), we may also assume $p > 0$. The terms of the sequence are then<br/><br/>$\frac{p}{q}$, $\frac{p+q}{q}$, $\frac{p+2q}{q}$, $\frac{p+3q}{q}, \ldots$.<br/><br/>We simply choose the terms<br/><br/>$\frac{p}{q}$, $\frac{p+pq}{q}$, $\frac{p+(2p+pq)q}{q}$ which are the same as $\frac{p}{q}$, $\frac{p(1+q)}{q}$, $\frac{p(1+q)^2}{q}$,<br/><br/>clearly a geometric progression.<br/><br/>Now for the other direction, assume $x+a, x+b, x+c$ are a geometric progression with $a < b < c$ nonnegative integers. Then<br/><br/>$\frac{x+a}{x+b} = \frac{x+b}{x+c} \Rightarrow x^2+(a+c)x+ac = x^2+2bx+b^2 \Rightarrow (a+c)x+ac = 2bx+b^2$.<br/><br/>Solving for $x$ yields $x = \frac{b^2-ac}{a+c-2b}$, which is clearly rational, as desired. (Note that $a+c-2b \neq 0$: otherwise we would have $b^2 = ac$ with $b = \frac{a+c}{2}$, contradicting AM-GM since $a \neq c$.) QED.<br/><br/>--------------------<br/><br/>Comment: An interesting "definition" of a rational number, pretty neat in fact. Though it may not seem so at first, the "if" direction was actually considerably harder than the "only if" direction because the approach was not as straightforward. The "only if" proof required simple algebra, while the "if" proof needed a little creative picking and choosing.
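Exact rational arithmetic makes the picking and choosing easy to verify; a Python sketch (my own addition; the sample values $p = 5$, $q = 7$ are arbitrary):

```python
# Sketch: verify the "if" construction for a sample rational x = p/q:
# the chosen terms are x, x + p, x + (2p + pq), i.e.
# p/q, p(1+q)/q, p(1+q)^2/q, and they form a geometric progression.

from fractions import Fraction

p, q = 5, 7
x = Fraction(p, q)
terms = [x, x + p, x + 2 * p + p * q]
# each term differs from x by a nonnegative integer, so it lies in the sequence
offsets_integral = all((t - x).denominator == 1 and t >= x for t in terms)
# geometric progression: middle term squared equals product of the outer terms
is_gp = terms[1] ** 2 == terms[0] * terms[2]
```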
Another approach for the "if" direction is to set $x = \frac{p}{q}$ and try to find corresponding $a, b, c$, but that makes it a little more difficult than necessary.<br/><br/>--------------------<br/><br/>Practice Problem: (1993 Canada National Olympiad - #3) In triangle $ABC$, medians to the sides $\overline{AB}$ and $\overline{AC}$ are perpendicular. Show that $\cot{B}+\cot{C} \ge \frac{2}{3}$.Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com2tag:blogger.com,1999:blog-2400513859305780710.post-66170771374807336382007-04-12T12:39:00.001-07:002013-10-06T19:14:35.275-07:00Fishy Triangular Number. Topic: Inequalities. Level: AIME.<strong>Problem</strong>: (2001 Poland Finals - #1) Prove the following inequality:<br/><br/>$x_1+2x_2+3x_3+\cdots+nx_n \le \frac{n(n-1)}{2}+x_1+x_2^2+x_3^3+\cdots+x_n^n$<br/><br/>where $x_i$ are positive reals.<br/><br/><strong>Solution</strong>: Like the title says, that triangular number looks really fishy... let's write it as $1+2+\cdots+(n-1)$ and pair up the terms on the RHS.<br/><br/>$\frac{n(n-1)}{2}+x_1+x_2^2+x_3^3+\cdots+x_n^n = x_1+(x_2^2+1)+(x_3^3+2)+\cdots+(x_n^n+n-1)$.<br/><br/>We claim that $x_i^i+i-1 \ge ix_i$ for $i = 1, 2, \ldots, n$. We'll use our good friend AM-GM to show this; in fact, it is quite simple.<br/><br/>$\frac{x_i^i+1+1+\cdots+1 \text{ (i-1 times)}}{i} \ge \sqrt[i]{x_i^i} = x_i$ so $x_i^i+i-1 \ge ix_i$<br/><br/>as desired. Sum them up to get our inequality. QED.<br/><br/>--------------------<br/><br/>Comment: Just a clever little application of AM-GM; apparently not a strong inequality at all, and the only equality case is $x_i = 1$ for all $i$.
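A quick numeric spot-check of the key claim $x^i + i - 1 \ge ix$ (a Python sketch I added; the sample points are arbitrary):

```python
# Sketch: spot-check x^i + i - 1 >= i*x for positive x and small i,
# with equality at x = 1.

samples = [0.1, 0.5, 1.0, 1.7, 3.0]
worst = min(x**i + i - 1 - i * x for i in range(1, 8) for x in samples)
at_one = [1.0**i + i - 1 - i * 1.0 for i in range(1, 8)]  # all should be 0
```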
Nevertheless, slightly on the tricky side, which makes it sufficiently satisfying to solve.<br/><br/>--------------------<br/><br/>Practice Problem: (2006 Poland Finals - #1) Solve in reals:<br/><br/>$a^2 = b^3+c^3$<br/>$b^2 = c^3+d^3$<br/>$c^2 = d^3+e^3$<br/>$d^2 = e^3+a^3$<br/>$e^2 = a^3+b^3$.Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com4tag:blogger.com,1999:blog-2400513859305780710.post-45377221313730030052007-04-10T10:10:00.001-07:002013-10-06T19:14:35.251-07:002007 Bellevue BATH.The 2007 Bellevue BATH will be taking place on May 19, 2007. It will be the best competition of the year! :) The information sheet can be found <a href="http://wangsblog.com/jeffrey/BellevueBATH.doc">here</a>. Sign up today!Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com2tag:blogger.com,1999:blog-2400513859305780710.post-57386699731288101712007-04-09T16:14:00.001-07:002013-10-06T19:14:35.268-07:00Get A Tan. Topic: Trigonometry. Level: AMC.<strong>Problem</strong>: (2007 MAO State - Gemini) Let $x$ be in degrees and $0^{\circ} < x < 45^{\circ}$. Solve for $x$: $\tan{(4x)} = \frac{\cos{x}-\sin{x}}{\cos{x}+\sin{x}}$.<br/><br/><strong>Solution</strong>: Here's a nice tangent identity that is not very well-known, but rather cool. Start with the regular tangent angle addition identity,<br/><br/>$\tan{(x+y)} = \frac{\tan{x}+\tan{y}}{1-\tan{x} \cdot \tan{y}}$.<br/><br/>Letting $x = 45^{\circ}$ and $y = -\theta$, we obtain<br/><br/>$\tan{(45^{\circ}-\theta)} = \frac{1-\tan{\theta}}{1+\tan{\theta}} = \frac{\cos{\theta}-\sin{\theta}}{\cos{\theta}+\sin{\theta}}$.<br/><br/>Well, look at that. It's the same expression as the RHS of the equation we want to solve. Substituting accordingly, it remains to solve<br/><br/>$\tan{(4x)} = \tan{(45^{\circ}-x)} \Rightarrow 4x = 45^{\circ}-x+180^{\circ}k \Rightarrow x = 9^{\circ}$, since $k = 0$ gives the only solution with $0^{\circ} < x < 45^{\circ}$.<br/><br/>QED.<br/><br/>--------------------<br/><br/>Comment: This identity is a nice one to keep around because it can turn up unexpectedly.
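Both the identity and the answer are easy to confirm numerically; a Python sketch (my own addition; the test angle is arbitrary):

```python
# Sketch: check tan(45 deg - theta) = (cos theta - sin theta)/(cos theta + sin theta)
# at a sample angle, and check that x = 9 degrees solves the contest equation.

import math

theta = math.radians(23.0)                      # arbitrary test angle
lhs = math.tan(math.radians(45.0) - theta)
rhs = (math.cos(theta) - math.sin(theta)) / (math.cos(theta) + math.sin(theta))

x = math.radians(9.0)
eq_residual = math.tan(4 * x) - (math.cos(x) - math.sin(x)) / (math.cos(x) + math.sin(x))
```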
Especially when you see that exact form and you're like "whoa, this is such a nice form; there must be an identity for it." So there.<br/><br/>--------------------<br/><br/>Practice Problem: (2007 MAO State - Gemini) One hundred positive integers, not necessarily distinct, have a sum of $331$. What is the largest possible product these numbers can attain?Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com3tag:blogger.com,1999:blog-2400513859305780710.post-39142676653427749552007-04-03T12:14:00.001-07:002013-10-06T19:14:35.249-07:00MAO Results.Check them out online!<br/><br/><a href="http://www.wamath.net/contests/StateMAT/">http://www.wamath.net/contests/StateMAT/</a>Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com2tag:blogger.com,1999:blog-2400513859305780710.post-92090597517704343352007-04-01T08:02:00.001-07:002013-10-06T19:14:35.296-07:00Sumsine. Topic: Trigonometry/Geometry. Level: AMC.<strong>Problem</strong>: (2007 MAO State - Gemini) Find the sum of the sines of the angles of a triangle whose perimeter is five times as large as its circumradius.<br/><br/><strong>Solution</strong>: At first, this problem seems very strange because we do not expect a nice relationship between the perimeter and the circumradius to offer much, but take another look. Sines, circumradius... Law of Sines! In fact, we get<br/><br/>$\frac{a}{\sin{A}} = \frac{b}{\sin{B}} = \frac{c}{\sin{C}} = 2R$<br/><br/>so we can rewrite $\sin{A}+\sin{B}+\sin{C}$ as<br/><br/>$\sin{A}+\sin{B}+\sin{C} = \frac{a}{2R}+\frac{b}{2R}+\frac{c}{2R} = \frac{a+b+c}{2R} = \frac{5R}{2R} = \frac{5}{2}$.<br/><br/>QED.<br/><br/>--------------------<br/><br/>Comment: Admittedly, MAO does not exactly have a plethora of cool problems, but this one was not bad. It wasn't too difficult, but it required some clever use of well-known identities to pull it off.
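As a sanity check on the Law of Sines step, here is a short Python sketch (the variable names are mine, not from the contest; any triangle works, since $a = 2R\sin{A}$ holds by construction):

```python
import math

# Build an arbitrary triangle from two angles and a circumradius.
A, B = 1.1, 0.7            # two arbitrary angles (radians)
C = math.pi - A - B        # third angle of the triangle
R = 3.0                    # arbitrary circumradius
a, b, c = (2 * R * math.sin(t) for t in (A, B, C))

# Law of Sines: a/sin A = 2R, hence sin A + sin B + sin C = (a+b+c)/(2R).
assert abs((math.sin(A) + math.sin(B) + math.sin(C)) - (a + b + c) / (2 * R)) < 1e-9
```

So whenever the perimeter equals $5R$, the sum of sines is forced to be $\frac{5R}{2R} = \frac{5}{2}$, regardless of the triangle's shape.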
Given that half of the problems were computationally intensive (think AIME I but worse), this was a nice relief in the middle of the contest.<br/><br/>--------------------<br/><br/>Practice Problem: (2007 MAO State - Gemini) The zeros of $f(x) = 7x^3-4x^2+K$ form an arithmetic sequence. Let $K = \frac{m}{n}$, where $m$ and $n$ are relatively prime positive integers. Find the value of $m+n$.Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com2tag:blogger.com,1999:blog-2400513859305780710.post-20152824515914826772007-03-31T14:07:00.001-07:002013-10-06T19:14:35.235-07:002007 MAO.Good job to all Bellevue people at MAO State! We placed 3rd in Sweepstakes! That's the highest we've ever gotten in my four years :).Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com1tag:blogger.com,1999:blog-2400513859305780710.post-18481444548638392202007-03-27T10:56:00.001-07:002013-10-06T19:14:35.234-07:00Condensation Sensation. Topic: Calculus/S&S.<strong>Theorem</strong>: (Cauchy Condensation Test) If $\displaystyle \{a_n\}_{n = 0}^{\infty}$ is a monotonically decreasing sequence of positive reals and $p$ is an integer greater than $1$, then<br/><br/>$\displaystyle \sum_{n=0}^{\infty} a_n$ converges if and only if $\displaystyle \sum_{n=0}^{\infty} p^n a_{p^n}$ converges.<br/><br/>--------------------<br/><br/><strong>Problem</strong>: Determine the convergence of $\displaystyle \sum_{n=2}^{\infty} \frac{1}{(\ln{n})^k}$, where $k$ is a positive real.<br/><br/><strong>Solution</strong>: Well, let's apply the Cauchy condensation test. Then we know that<br/><br/>$\displaystyle \sum_{n=2}^{\infty} \frac{1}{(\ln{n})^k}$<br/><br/>converges if and only if<br/><br/>$\displaystyle \sum_{n=2}^{\infty} \frac{p^n}{(\ln{p^n})^k} = \sum_{n=2}^{\infty} \frac{p^n}{(n \ln{p})^k} = \frac{1}{(\ln{p})^k} \sum_{n=2}^{\infty} \frac{p^n}{n^k}$<br/><br/>does.
But this clearly diverges: the numerator is exponential while the denominator is only a power of $n$, so the terms do not even tend to zero. Hence the original series diverges for every positive real $k$. QED.<br/><br/>--------------------<br/><br/>Comment: This is a pretty powerful test for convergence, at least in the situations in which it can be applied. The non-calculus proof for the divergence of the harmonic series is very similar to the Cauchy condensation test; in fact, the condensation test would state that<br/><br/>$\displaystyle \sum_{n=1}^{\infty} \frac{1}{n}$ converges iff $\displaystyle \sum_{n=1}^{\infty} \frac{p^n}{p^n}$ converges,<br/><br/>which clearly shows that the harmonic series diverges.<br/><br/>--------------------<br/><br/>Practice Problem: Determine the convergence of $\displaystyle \sum_{n=2}^{\infty} \frac{1}{(\ln{n})^{\ln{n}}}$.Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com5tag:blogger.com,1999:blog-2400513859305780710.post-56001951789898676382007-03-26T09:33:00.001-07:002013-10-06T19:14:35.262-07:00Hey Now. Topic: Inequalities. Level: AIME.<strong>Problem</strong>: Let $a, b, c$ be positive reals such that $a+b+c = 3$. Prove that $\sqrt{a}+\sqrt{b}+\sqrt{c} \ge ab+bc+ca$.<br/><br/><strong>Solution</strong>: We play around and guess that $\sqrt{a} \ge \frac{ab+ac}{2}$ because that would be convenient. Indeed, replacing $b+c$ with $3-a$, this is equivalent to<br/><br/>$\sqrt{a} \ge \frac{a(3-a)}{2} \Leftrightarrow 4a \ge a^2(3-a)^2 \Leftrightarrow (a-1)^2(4-a) \ge 0$,<br/><br/>which is clearly true for $0 < a < 3$. So we get the three inequalities<br/><br/>$\sqrt{a} \ge \frac{ab+ac}{2}$, $\sqrt{b} \ge \frac{bc+ba}{2}$, $\sqrt{c} \ge \frac{ca+cb}{2}$.<br/><br/>Adding them up, we have the desired $\sqrt{a}+\sqrt{b}+\sqrt{c} \ge ab+bc+ca$. QED.<br/><br/>--------------------<br/><br/>Comment: I found this solution to be pretty clever, as the initial "guess" is not trivially true. But the whole thing works out quite nicely with symmetry so it's all good.
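If you want to convince yourself numerically before hunting for the crux step, here is a rough Python sketch (my own, not part of the solution; the helper name `lemma_holds` is hypothetical, and the small tolerances just absorb floating-point error):

```python
import math
import random

def lemma_holds(a: float) -> bool:
    """Key per-variable estimate: sqrt(a) >= a(3-a)/2 for 0 < a < 3,
    which is equivalent to (a-1)^2 (4-a) >= 0."""
    return math.sqrt(a) >= a * (3 - a) / 2 - 1e-12

random.seed(0)
for _ in range(1000):
    # Random positive triple with a + b + c = 3: cut (0,3) at two points.
    x, y = sorted(random.uniform(0, 3) for _ in range(2))
    a, b, c = x, y - x, 3 - y
    if min(a, b, c) <= 0:
        continue
    assert lemma_holds(a) and lemma_holds(b) and lemma_holds(c)
    # The full inequality follows by summing the three lemmas.
    assert math.sqrt(a) + math.sqrt(b) + math.sqrt(c) >= a * b + b * c + c * a - 1e-9
```

A thousand random triples is no proof, of course, but it is a cheap way to test whether a guessed lemma like $\sqrt{a} \ge \frac{a(3-a)}{2}$ is even plausible before trying to prove it.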
Inequality problems usually require several random ideas and inspiration before finding the crux step.<br/><br/>--------------------<br/><br/>Practice Problem: Let $a, b, c$ be positive reals such that $abc = 1$. Prove that $(a+b)(b+c)(c+a) \ge 2(1+a+b+c)$.Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com2tag:blogger.com,1999:blog-2400513859305780710.post-22749345679660851812007-03-20T12:48:00.001-07:002013-10-06T19:14:35.241-07:00This Integral Not-Diverges. Topic: Calculus/S&S.<strong>Problem</strong>: Show that the integral $\displaystyle \int_0^{\infty} \frac{\sin{x}}{x}dx$ converges.<br/><br/><strong>Solution</strong>: Consider the intervals $[2k \pi, (2k+2) \pi]$ for $k = 1, 2, \ldots$. We can rewrite the given integral as<br/><br/>$\displaystyle \int_0^{\infty} \frac{\sin{x}}{x}dx = \sum_{k=1}^{\infty} \int_{2k \pi}^{(2k+2) \pi} \frac{\sin{x}}{x} dx+C$,<br/><br/>where $C = \displaystyle \int_0^{2 \pi} \frac{\sin{x}}{x} dx$ is a finite constant that does not affect convergence. So how can we go about bounding the integral<br/><br/>$\displaystyle \int_{2k \pi}^{(2k+2) \pi} \frac{\sin{x}}{x} dx$?<br/><br/>Well, first note that $\sin{x} = - \sin{(x+\pi)}$ so we can say<br/><br/>$\displaystyle \int_{2k \pi}^{(2k+2) \pi} \frac{\sin{x}}{x} dx = \int_{2k \pi}^{(2k+1)\pi} \left( \frac{\sin{x}}{x}+\frac{\sin{(x+\pi)}}{x+\pi} \right) dx = \int_{2k \pi}^{(2k+1)\pi} \left( \frac{\sin{x}}{x}-\frac{\sin{x}}{x+\pi}\right) dx$.<br/><br/>Then, putting the last expression under a common denominator, we get<br/><br/>$\displaystyle \int_{2k \pi}^{(2k+1)\pi} \left( \frac{\sin{x}}{x}-\frac{\sin{x}}{x+\pi}\right) dx = \int_{2k \pi}^{(2k+1)\pi} \frac{\pi \sin{x}}{x(x+\pi)} dx$,<br/><br/>which we can easily bound with $\sin{x} \le 1$ and $x \ge 2k \pi$.
This gives us<br/><br/>$\displaystyle \int_{2k \pi}^{(2k+1)\pi} \frac{\pi \sin{x}}{x(x+\pi)} dx < [(2k+1)\pi-2k \pi] \cdot \frac{\pi}{(2k \pi)(2k \pi + \pi)} = \frac{1}{2k(2k+1)}$.<br/><br/>Hence we know that<br/><br/>$\displaystyle \int_0^{\infty} \frac{\sin{x}}{x}dx = \sum_{k=1}^{\infty} \int_{2k \pi}^{(2k+2) \pi} \frac{\sin{x}}{x} dx+C < \sum_{k=1}^{\infty} \frac{1}{2k(2k+1)}+C$<br/><br/>and this converges by comparison with the convergent $p$-series $\displaystyle \sum_{k=1}^{\infty} \frac{1}{k^2}$, since $\frac{1}{2k(2k+1)} < \frac{1}{4k^2}$. QED.<br/><br/>--------------------<br/><br/>Comment: A pretty neat problem, though it is a standard convergence/divergence exercise. I'm sure there are many ways of doing this, but it's always nice to come up with a cool way of showing that a series converges or diverges. It's also interesting to note that the practice problem integral, which is only slightly different from this one, diverges.<br/><br/>--------------------<br/><br/>Practice Problem: Show that the integral $\displaystyle \int_0^{\infty} \frac{|\sin{x}|}{x}dx$ diverges.Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com6tag:blogger.com,1999:blog-2400513859305780710.post-66273005148514710492007-03-18T10:52:00.001-07:002013-10-06T19:14:35.276-07:00LaTeX In Comments.The function to use LaTeX in comments has been added. Simply enclose your LaTeX code in <code>[ tex ]</code> and <code>[ /tex ]</code> tags (without spaces).Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com1tag:blogger.com,1999:blog-2400513859305780710.post-2090642277112576572007-03-17T14:56:00.001-07:00Frogger. Topic: Combinatorics. Level: AIME.<strong>Problem</strong>: (2007 AIMEI - #6) A frog is placed at the origin on the number line, and moves according to the following rule: in a given move, the frog advances to either the closest point with a greater integer coordinate that is a multiple of $3$, or to the closest point with a greater integer coordinate that is a multiple of $13$.
A <em>move sequence</em> is a sequence of coordinates which correspond to valid moves, beginning with $0$, and ending with $39$. For example, $0, 3, 6, 13, 15, 26, 39$ is a move sequence. How many move sequences are possible for the frog?<br/><br/><strong>Solution</strong>: Let $S_k$ be the number of possible move sequences starting from $0$ and ending up at $k$. Clearly it is only interesting when $k$ is divisible by $3$ or $13$. So let's try to construct the sequence.<br/><br/>$S_0 = 1$ is our base case. There is only one way to get to the value $3$ from $0$, so $S_3 = 1$. Similarly, $S_6 = 1$, $S_9 = 1$, and $S_{12} = 1$. But how many ways can we get to $13$? Well, if we are at any of the previously mentioned numbers, we can go straight to $13$ (by using the next-multiple-of-$13$ move). So<br/><br/>$S_{13} = S_0+S_3+S_6+S_9+S_{12} = 5$.<br/><br/>Now how about $S_{15}$? Well, we can get there from either $12$ or $13$, so<br/><br/>$S_{15} = S_{12}+S_{13} = 6$.<br/><br/>Continuing, we have $S_{18} = S_{21} = S_{24} = 6$ because each of these is reachable only from the previous multiple of $3$. Then<br/><br/>$S_{26} = S_{13}+S_{15}+S_{18}+S_{21}+S_{24} = 29$.<br/><br/>In similar fashion, we obtain $S_{27} = S_{30} = S_{33} = S_{36} = 35$ and finally $S_{39} = S_{26}+S_{27}+S_{30}+S_{33}+S_{36} = 169$. QED.<br/><br/>--------------------<br/><br/>Comment: This is a cool problem, though they could have made it harder to brute force (i.e. have the answer be $500+$) to really weed out the people who did not actually know how to intelligently count it. It is based on a nice recursive concept known as "dynamic programming," which is one of the coolest computer science techniques ever.
The idea is to use solutions of subproblems to construct the solution to a larger problem, like in this case using smaller values of $S_k$ to calculate $S_{39}$.<br/><br/>--------------------<br/><br/>Practice Problem: (2007 AIMEI - #1) How many positive perfect squares less than $10^6$ are multiples of $24$?Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com4tag:blogger.com,1999:blog-2400513859305780710.post-10509328478786660762007-03-15T15:25:00.001-07:002013-10-06T19:14:35.228-07:002007 AIME I Report.So, overall, it appears that this AIME was not particularly <em>difficult</em>, but allowed people to make copious mistakes and drop their scores significantly. Cool... not. But anyway, some brief comments.<br/><br/>1. Decent NT, a good problem to start off with, though slightly more difficult than it usually is.<br/>2. BIG paragraph, lots of text. Tripped up many people due to misreadings, but otherwise decent.<br/>3. Easy? Just expand.<br/>4. Stupid, more a test of your ability to think of what the test writers were thinking of than math.<br/>5. Kind of lame, but easy enough to not be a problem. Just takes a little time.<br/>6. Cool problem, dynamic programming comes in handy here.<br/>7. Also pretty nice, good exercise in logs and summations.<br/>8. Not bad either, I did not recognize the solution at first. In the end, it came down to the Euclidean Algorithm for polynomials basically.<br/>9. Decent geometry, but quite a long problem.<br/>10. What? The only reasonable way to do this was brute force counting. Anything else would have been too hard to come up with.<br/>11. Too easy and well-known. Something similar but more difficult has shown up on the Putnam.<br/>12. Eww... anything involving three different radicals should not be allowed.<br/>13. Still not sure how to do this, but it seemed reasonable to most people.<br/>14. Too easy to guess, but a cool problem nonetheless. 
Recurrences are a lot cooler than ugly geometry or counting.<br/>15. Way beyond me, I do not like geometry problems as number 15.<br/><br/>So there we go! You can find the problems <a href="http://www.artofproblemsolving.com/Forum/resources.php?c=182&cid=45&year=2007">here</a> if you so wish.Jeffrey Wanghttp://www.blogger.com/profile/11114458640271201663noreply@blogger.com3