| Being a limey, I'm not familiar with the Putnam competition.
It's certainly a mixed bag of problems, ranging from straightforward
or book-knowledge questions to genuinely neat ones.
-------------------------------------------------------------------------------
Problem A-4:
(a) If every point on the plane is painted one of three colors, do
there necessarily exist two points of the same color exactly one inch
apart?
(b) What if "three" is replaced by "nine"?
Justify your answers.
Answer A-4: Book, but neat
See Introduction to Graph Theory by Bela Bollobas, Springer
Verlag, standard yellow hardback.
(a) Consider equilateral triangles of side 1": ABC, CBD, AEF, FEG, placed
so that D and G are 1" apart.  Try to colour these seven points so that no
two points of the same colour are 1" apart.  A, B and C must get three
distinct colours, and D, being 1" from both B and C, is forced to be the
same colour as A.  By the same argument applied to AEF and FEG, G is forced
to be the same colour as A.  But now D and G are the same colour, and are
1" apart.
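For the sceptical, the seven-point configuration above (nowadays known as
the Moser spindle) can be checked by brute force.  A small Python sketch
(mine, not part of the original argument) enumerates all colourings and
confirms that no 3-colouring avoids a monochromatic unit-distance pair,
while a 4-colouring exists:

```python
from itertools import product

# the seven points and the eleven unit-distance pairs of the configuration
# (triangles ABC, CBD, AEF, FEG, plus the 1" pair D-G)
V = "ABCDEFG"
EDGES = [("A","B"), ("A","C"), ("B","C"), ("B","D"), ("C","D"),
         ("A","E"), ("A","F"), ("E","F"), ("E","G"), ("F","G"),
         ("D","G")]

def proper_colourings(k):
    """Number of k-colourings with no unit-distance pair monochromatic."""
    count = 0
    for choice in product(range(k), repeat=len(V)):
        c = dict(zip(V, choice))
        if all(c[u] != c[v] for u, v in EDGES):
            count += 1
    return count

assert proper_colourings(3) == 0   # three colours always force a monochromatic pair
assert proper_colourings(4) > 0    # four colours do suffice for these seven points
```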
(b) Seven colours.  Tile the plane with hexagons.  Pick one hexagon and
colour it and the six surrounding hexagons all different colours.  Call this
(after non-numerate wargamers) a megahex.  We can colour *all* the hexagons,
so that the plane is tiled with megahexes, preserving orientation.  Then
if we've chosen the side, d, of the hexagons correctly, no two points in the
same hexagon are as far as 1 apart, ie. d < 1/2.  Further, no two hexagons of
the same colour are as near as 1.  This implies d > 1/sqrt(7).
Notes: (1) This self-similar pattern of one hexagon matching to seven smaller
ones I have seen proposed as a way of assigning call frequencies for cellular
phones over a city.  I don't know whether it's been implemented.
The fate of 4, 5 or 6 colours is, I believe, still unknown.  Since 7 is OK,
9 is trivially OK. With 9, I guess you could play with squares instead of
hexagons, and have one square mapping onto 9 smaller ones.
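To see the seven-colour scheme in action, here is a Python sketch.  The
particular layout is my own (flat-top hexagons of side d = 0.45, coloured
(q + 3r) mod 7 in axial coordinates), but it is the same megahex idea:
each hexagon and its six neighbours receive all seven colours, and random
pairs of points exactly 1 apart are checked to land on different colours.

```python
import math, random

D = 0.45  # hexagon side: between 1/sqrt(7) ~ 0.378 and 1/2, as derived above

def hex_of(x, y):
    """Axial coordinates (q, r) of the flat-top hexagon of side D containing (x, y)."""
    qf = (2.0/3.0) * x / D
    rf = (-x/3.0 + math.sqrt(3.0)/3.0 * y) / D
    # standard cube rounding to the nearest hexagon centre
    xf, zf = qf, rf
    yf = -xf - zf
    rx, ry, rz = round(xf), round(yf), round(zf)
    dx, dy, dz = abs(rx - xf), abs(ry - yf), abs(rz - zf)
    if dx > dy and dx > dz:
        rx = -ry - rz
    elif dy > dz:
        ry = -rx - rz
    else:
        rz = -rx - ry
    return rx, rz

def colour(x, y):
    q, r = hex_of(x, y)
    return (q + 3*r) % 7

# the central hexagon and its six neighbours really do get all seven colours
megahex = [(0,0), (1,0), (0,1), (-1,1), (-1,0), (0,-1), (1,-1)]
assert len({(q + 3*r) % 7 for q, r in megahex}) == 7

# spot-check: no two points at distance exactly 1 share a colour
random.seed(0)
for _ in range(20000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    t = random.uniform(0, 2*math.pi)
    assert colour(x, y) != colour(x + math.cos(t), y + math.sin(t))
```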
-------------------------------------------------------------------------------
Problem A-5: Prove that there exists a *unique* function f from the
set R+ of positive real numbers to R+ such that
f(f(x)) = 6x - f(x) and f(x) > 0 for all x > 0.
Answer A-5: Neat
I love this one, it's so cute.
Firstly, there *is* a function which satisfies the criteria, namely
f(x) = 2x.
Next, let f_j(x) be f applied j times to x. In matrix notation:
( f_j(x) ) = ( 0 1 ) * ( f_(j-1)(x) )
( f_(j+1)(x) ) ( 6 -1 ) ( f_j(x) )
By applying this j times, diagonalizing (eigenvalues 2 and -3) and
simplifying out, we can only get:
f_j(x) = ( a.2^j + b.(-3)^j )*x
+ ( c.2^j + d.(-3)^j )*f(x)
We can solve for a,b,c,d, knowing the values for j=0 & 1.
f_j(x) = ( 3.2^j + 2.(-3)^j )*x/5
+ ( 2^j - (-3)^j )*f(x)/5
*But* f_j(x) > 0 for all x > 0.
So f(x)/x <> (3.2^j + 2.(-3)^j) / (-2^j + (-3)^j) = S(j)
where the sign <> is > for j odd
< for j even
So f(x)/x is bracketed by the convergent sequence S(j).  Therefore f(x)/x
must be the limit of that sequence, which is 2.
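A quick numerical check of the bracketing, using nothing beyond the
formulas above: S(j) for odd j climbs towards 2 from below, S(j) for even
j descends towards 2 from above, and f(x) = 2x satisfies the equation.

```python
def S(j):
    """S(j) = (3.2^j + 2.(-3)^j) / (-2^j + (-3)^j), for j >= 1."""
    return (3 * 2**j + 2 * (-3)**j) / (-2**j + (-3)**j)

# f(x) = 2x satisfies the functional equation exactly: f(f(x)) = 4x = 6x - f(x)
f = lambda x: 2*x
assert all(f(f(x)) == 6*x - f(x) for x in (1, 2, 3, 10))

# odd j give lower bounds below 2, even j give upper bounds above 2,
# and both sequences tend to 2
assert all(S(j) < 2 for j in range(1, 40, 2))
assert all(S(j) > 2 for j in range(2, 40, 2))
assert abs(S(39) - 2) < 1e-6
```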
-------------------------------------------------------------------------------
Problem A-6: If a linear transformation A on an n-dimensional vector
space has n + 1 eigenvectors such that any n of them are linearly
independent, does it follow that A is a scalar multiple of the
identity? Prove your answer.
Answer A-6: Is there a neat answer?
u(i) i=0..n are eigenvectors.
Suppose that there are eigenspaces E(j) j=1..J and that J>1.  Each
eigenvector u(i) lies in exactly one eigenspace, and the sum of the
dimensionalities, d(j), of the eigenspaces is at most n.  Since there are
n+1 eigenvectors, some eigenspace E(k) must contain more than d(k) of the
u(i).  These eigenvectors are therefore not linearly independent.  Any set
of n out of the n+1 u(i) which includes all of them will therefore be
linearly dependent.  So J=1.
The next question: are there any off-diagonal terms in the Jordan
canonical form of A?  If there are, then the geometric multiplicity of the
eigenvalue is less than n, ie. there are *no* sets of n linearly independent
eigenvectors at all.  So by contradiction, there are no off-diagonal terms.
Thus A is aI for some scalar a.
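As a concrete illustration (my example, with n = 3): for A = 2I the
hypothesis really can be met, with e1, e2, e3 and e1+e2+e3 as the n+1
eigenvectors, any three of which are independent.

```python
from itertools import combinations

def det3(rows):
    """3x3 determinant by expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = rows
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def matvec(M, v):
    return tuple(sum(M[i][j]*v[j] for j in range(3)) for i in range(3))

A = [[2, 0, 0], [0, 2, 0], [0, 0, 2]]          # A = 2I, a scalar multiple of I
vecs = [(1,0,0), (0,1,0), (0,0,1), (1,1,1)]     # n+1 = 4 eigenvectors

# each is an eigenvector with eigenvalue 2, and any 3 of the 4 are independent
assert all(matvec(A, v) == tuple(2*x for x in v) for v in vecs)
assert all(det3(list(trio)) != 0 for trio in combinations(vecs, 3))
```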
Andrew
|
|
Problem B-5: For positive integers n, let M(n) be the 2n + 1 by 2n + 1
skew-symmetric matrix for which each entry in the first n subdiagonals
below the main diagonal is 1 and each of the remaining entries below
the main diagonal is -1. Find, with proof, the rank of M(n).
(According to the definition the rank of a matrix is the largest k
such that there is a k x k submatrix with non-zero determinant.)
One may note that M(1) = ( 0 -1 1 ) and M(2) = ( 0 -1 -1 1 1 )
( 1 0 -1 ) ( 1 0 -1 -1 1 )
( -1 1 0 ) ( 1 1 0 -1 -1 )
( -1 1 1 0 -1 )
( -1 -1 1 1 0 )
Answer B-5: Slightly fiddly to explain, but a simple clean idea.
Let m = 2n+1
M = M(n) is a cyclic (circulant) matrix, ie. if P is the permutation matrix:
P(i,j) = 1 iff j=i+1 (mod m)
0 otherwise
then P.M = M.P
Now the eigenvectors of a cyclic matrix are the eigenvectors of P
although the eigenvalues may be different.
[To see this: let w be a primitive
mth root of 1. The eigenvectors x(j) are the transposes of:
(1, w^(j)..., w^(kj),... w^((m-1)j))
for j = 0 to m-1, with corresponding eigenvalues a(j) = w^(j). So all the
eigenvalues are distinct.
If x is an eigenvector of P, eigenvalue a, then:
P.M.x = M.P.x = M.a.x. = a.M.x.
ie. M.x is an eigenvector of P, with the same eigenvalue as x has.
But the eigenspace of a has dimension 1. Thus M.x = b.x for some b.
I.e. x is an eigenvector of M as well, with eigenvalue b.]
So how many of the b(j) are 0?  We now need to use more of our
knowledge about M.
b(0).x(0) = M.x(0) where x(0) is the transpose of (1,1,... 1). So
b(0) is evidently the sum of any row of M, which is zero.
To show that b(j) =/= 0 for j>0, take M.x(j) and subtract the first
element from the second.  On one hand this is b(j).(w^(j) - 1), with
w^(j) - 1 =/= 0; on the other, working out the entries, it is:
1 + w^(j) - 2.w^((n+1)j)
which cannot be zero, since |1 + w^(j)| < 2 = |2.w^((n+1)j)| for 0 < j < m.
With the multiplicity of 0, m(0) = 1, it follows that rank(M(n)) = 2n.
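The claimed answer is easy to confirm for small n; this Python sketch
(mine) builds M(n) straight from the definition and computes its rank by
exact Gaussian elimination over the rationals.

```python
from fractions import Fraction

def M(n):
    """The (2n+1) x (2n+1) skew-symmetric matrix from the problem statement."""
    m = 2*n + 1
    A = [[0]*m for _ in range(m)]
    for i in range(m):
        for j in range(i):
            A[i][j] = 1 if i - j <= n else -1   # first n subdiagonals are 1
            A[j][i] = -A[i][j]                  # skew-symmetry above the diagonal
    return A

def rank(A):
    """Rank by exact Gaussian elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in A]
    rows, cols, r = len(A), len(A[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(rows):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f*b for a, b in zip(A[i], A[r])]
        r += 1
    return r

assert M(2)[0] == [0, -1, -1, 1, 1]            # matches the displayed M(2)
assert all(rank(M(n)) == 2*n for n in range(1, 5))
```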
-------------------------------------------------------------------------------
Problem B-6: Prove that there exist an infinite number of ordered
pairs (a,b) of integers such that for every positive integer t the
number at + b is a triangular number if and only if t is a triangular
number. (The triangular numbers are the t(n) = n(n + 1)/2 with n in
{0,1,2,...}.)
Answer B-6: Some algebra to support John Munzer's powerful intuition.
Number pairs of the form ( (2r+1)^2, r(r+1)/2 ), where r is a natural
number, satisfy the required property.  Thus there are an infinite number
of such ordered pairs.
For suppose that t is a triangular number i(i+1)/2; then
at + b = (2r+1)^2.i(i+1)/2 + r(r+1)/2
       = ( (2r+1)i + r ).( (2r+1)i + r+1 )/2
Now suppose that t is such that at + b = u(u+1)/2.
Write:
u = (2r+1)i + s,  where s is in {0,1,... 2r}
(2r+1)^2.t + r(r+1)/2 = (2r+1)^2.i^2/2 + (2r+1)(2s+1)i/2 + s(s+1)/2
Multiply each side by 8, and add 1:
8(2r+1)^2.t + (2r+1)^2 = 4(2r+1)^2.i^2 + 4(2r+1)(2s+1)i + (2s+1)^2
Suppose that p is a prime, with c,d chosen so that:
(p^c).R = 2r+1,  (p^d).S = 2s+1,  with p not dividing R or S.
Suppose further, if possible, that c > d.  Dividing through by p^(2d):
(8t+1).R^2.p^(2(c-d)) = 4i^2.R^2.p^(2(c-d)) + 4i.RS.p^(c-d) + S^2
ie. p divides S, contrary to hypothesis.
So c <= d for every prime p, ie. 2r+1 divides 2s+1, and given the
bounds on s therefore r=s, and t = i(i+1)/2.  So there we are.
Note that if (a,b) and (A,B) are two pairs which satisfy the
requirements of the problem, we can generate another by composition:
(a,b)~(A,B) = (aA, aB+b).  (This doesn't give us any new solutions from
what we had before, however.)
The trivial solution (1,0) is the unique identity, left and right, of this
composition, so to generate an infinite number of pairs, all one needs is
a single non-trivial example, eg (9,1).
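A numerical sanity check of both the family ((2r+1)^2, r(r+1)/2) and the
composition, using nothing beyond the formulas above:

```python
import math

def is_triangular(t):
    s = math.isqrt(8*t + 1)         # t is triangular iff 8t+1 is a perfect square
    return s*s == 8*t + 1

def has_property(a, b, limit=3000):
    """Check 'at+b triangular iff t triangular' for t = 1 .. limit-1."""
    return all(is_triangular(a*t + b) == is_triangular(t) for t in range(1, limit))

# the family ((2r+1)^2, r(r+1)/2) ...
assert all(has_property((2*r + 1)**2, r*(r + 1)//2) for r in range(1, 6))

# ... and the composed pair (9,1)~(9,1) = (81, 10), which is the r = 4 member
assert has_property(9*9, 9*1 + 1)
```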
Although this solution is expressed algebraically, for me the intuition was
trying to stick triangles together to make a bigger triangle, ensuring that
I never overlapped or missed an edge (although corners I could be more
cavalier about, since they would lead to the constant b term). I seem to
have self-similarity on the brain at the moment, I see it everywhere.
------------------------------------------------------------------------------
|
| I've been typing these in for a while, so I haven't read .6
yet. A-6 and B-2 aren't here.
Dan
Problem A-1: Let R be the region consisting of the points (x,y) of the
cartesian plane satisfying both |x| - |y| <= 1 and |y| <= 1. Sketch
the region R and find its area.
The second condition limits the points to below the line y=1
and above the line y=-1. The first condition translates in
the different quadrants to
Quadrant |x| |y| condition
-------- --- --- ---------
I x y x - y <= 1
II -x y - x - y <= 1
III -x -y - x + y <= 1
IV x -y x + y <= 1
The resulting area is the border and interior of the polygon
you get by the line segments from (-2,1) to (2,1) to (1,0)
to (2,-1) to (-2,-1) to (-1,0) back to (-2,1). The area is
made up of four squares (each of area 1) and four triangles
(each of area 1/2) for a total area of 6.
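The area can be double-checked with the shoelace formula on the polygon's
vertices (a quick Python sketch):

```python
# boundary polygon of R, vertices listed in order as in the text
poly = [(-2,1), (2,1), (1,0), (2,-1), (-2,-1), (-1,0)]

def shoelace(pts):
    """Area of a simple polygon from its vertex list."""
    n = len(pts)
    s = sum(pts[i][0]*pts[(i+1) % n][1] - pts[(i+1) % n][0]*pts[i][1]
            for i in range(n))
    return abs(s) / 2

assert shoelace(poly) == 6.0
```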
************************************************************************
Problem A-2: A not uncommon calculus mistake is to believe that the
product rule for derivatives says that (fg)' = f'g'. If f(x) =
exp(x^2), determine, with proof, whether there exists an open interval
(a,b) and a non-zero function g defined on (a,b) such that the wrong
product rule is true for x in (a,b).
The proof is in reply .2. If there is such a function g, it
must satisfy g'(x) = g(x) + ( g'(x) / (2x) ). Such a g
exists, g(x) = exp(x) sqrt(2x - 1), in any open interval (a,b)
with 1/2 < a < b <= oo.
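A numerical spot-check of the claimed g, comparing a central-difference
estimate of (fg)' against f'g' (nothing here beyond the formulas above):

```python
import math

def f(x):  return math.exp(x*x)
def g(x):  return math.exp(x) * math.sqrt(2*x - 1)
def fp(x): return 2*x*math.exp(x*x)                       # f'(x)
def gp(x): return math.exp(x) * 2*x / math.sqrt(2*x - 1)  # g'(x)

def fg_prime(x, h=1e-6):
    """Central-difference estimate of (fg)'(x)."""
    return (f(x+h)*g(x+h) - f(x-h)*g(x-h)) / (2*h)

# the "wrong product rule" (fg)' = f'g' really holds for x > 1/2
for x in (0.6, 1.0, 1.5, 2.0):
    assert abs(fg_prime(x) - fp(x)*gp(x)) < 1e-3 * abs(fp(x)*gp(x))
```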
************************************************************************
Problem A-3: Determine, with proof, the set of real numbers x for
which sum from n=1 to infinity of ((1/n)csc(1/n) - 1)^x converges.
csc(1/n) = 1 / sin(1/n). For the n in question, n = 1, 2,
3, ... the argument 1/n to sin is between 0 and pi/2
radians, and so 0 < sin(1/n) < 1. Therefore 1 < csc(1/n),
and there are no worries about dividing by zero. So rewrite
(1/n)csc(1/n) as 1 / (n sin(1/n)). This can be expanded out
as 1 / (n * (1/n - 1/(3! n^3) + 1/(5! n^5) - ...)) which is
1 / (1 - 1/(6n^2) + <terms in 1/n^4 and higher powers>) which
is 1 + 1/(6n^2) + <terms in 1/n^4 and higher powers>.
Subtracting 1 and raising to the x power gives
((1/n)csc(1/n) - 1)^x is approx. (1/(6n^2))^x
So we're not raising any negative numbers to a nonintegral
power, either. The series will converge or diverge the same
as the series summing (1/n^2)^x, which converges for x > 1/2
and diverges for x <= 1/2.
I don't feel this is quite rigorous enough yet, i.e., I have
determined the answer but not proven it. At this point I
would go on to the rest of the problems and come back after
finishing them to show that there are constants A and B such
that for every n,
A(1/n^2) <= ((1/n)csc(1/n) - 1) <= B(1/n^2)
and this makes the part about converging or diverging as
does (1/n^2)^x rigorous.  You can play fast and loose with
series manipulations and approximations to get the result,
but at some point you have to show that those steps were
justified, because they are not valid in every situation.
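The key asymptotic claim -- that (1/n)csc(1/n) - 1 behaves like 1/(6n^2) --
is easy to spot-check numerically:

```python
import math

# 6 n^2 * ((1/n)csc(1/n) - 1) should tend to 1 as n grows
for n in (10, 100, 1000, 10000):
    term = 1.0 / (n * math.sin(1.0/n)) - 1.0
    assert abs(6 * n * n * term - 1.0) < 0.01
```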
************************************************************************
Problem A-4:
(a) If every point on the plane is painted one of three colors, do
there necessarily exist two points of the same color exactly one inch
apart?
(b) What if "three" is replaced by "nine"?
Justify your answers.
This is in reply .3. I saw solutions for (a) in the USENET
posting that I got the set of problems from. The easiest
was to set up a sequence of equilateral triangles of side
one like this:
. . .
. . . . (some of the triangles are upside down)
Color the two leftmost points A and B (let C be the third
color) like this:
B . .
A . . .
Now the only way to continue coloring these so that no two
points one inch apart have the same color is:
B A C
A C B A
So if no two points exactly one inch apart have the same
color, then every two points exactly three inches apart must
have the same color (lay a strip of triangles like the one
above along the segment joining them).
Now take a circle of radius three; every point on it has the
same color as the center, and there is a chord of length one
connecting two points of the circle. So every coloring of
the plane with at most three colors must have two points
exactly one inch apart with the same color.
For part b I also worked out the hexagonal pattern in reply
.3 before reading .3 but after having seen a comment in the USENET
postings that 7 colors was enough to keep points an inch apart
differently colored. Seven suggested a hexagon inside other
hexagons.
************************************************************************
Problem A-5: Prove that there exists a *unique* function f from the
set R+ of positive real numbers to R+ such that
f(f(x)) = 6x - f(x) and f(x) > 0 for all x > 0.
Also in reply .3. Assume there is a linear solution f(x) =
ax + b. Then f(f(x)) = a(ax + b) + b = a^2x + (ab + b) =
6x - f(x) = 6x - ax - b. So
a^2 = 6 - a and ab + b = -b
The second implies that either a = -2 or b = 0. Since a =
-2 would violate the condition that f(x) > 0, it must be the
case that b = 0. The first condition implies a = 2 or a =
-3, and again -3 can be ruled out because f(x) > 0. But a =
2 and b = 0 works; f(x) = 2x satisfies f(x) > 0 for x > 0
and f(f(x)) = 6x - f(x).
Proving that this is the only solution is harder. The
approach in .3 does it, once you realize that he decomposed
the matrix
	A = ( 0  1 )   into a product   S D S^(-1)
	    ( 6 -1 )
with
	S = ( 1  1 ),   S^(-1) = ( 3/5  1/5 ),   D = ( 2  0 )
	    ( 2 -3 )             ( 2/5 -1/5 )        ( 0 -3 )
Then
	A^n = (S D S^(-1))^n = S D^n S^(-1),  where  D^n = ( 2^n    0    )
	                                                   ( 0   (-3)^n )
The diagonal of D holds the eigenvalues, S the matrix of
column eigenvectors.
Anyway, I tried to show f was unique by taking any solution
f(x) and letting it be f(x) = 2x + g(x), then trying to show
g(x) = 0. We start with 0 < f(x) and 0 < f(f(x)) = 6x - f(x)
so we have 0 < f(x) < 6x or -2x < g(x) < 4x.
Now  f(f(x)) = 2f(x) + g(f(x))
             = 2(2x + g(x)) + g(2x + g(x))
             = 4x + 2g(x) + g(2x + g(x))
But  f(f(x)) = 6x - f(x)
             = 6x - (2x + g(x))
             = 4x - g(x)
Equating the two gives 3g(x) + g(2x + g(x)) = 0.
Now substitute 2x + g(x) (i.e., f(x)) for x in the
inequality -2x < g(x) < 4x. You get
-4x - 2g(x) < g(2x + g(x)) < 8x + 4g(x)
adding in 3g(x) to each term and using the equality derived
above gives
-4x + g(x) < 0 < 8x + 7g(x)
This gives g(x) < 4x which was already known, but also gives
-(8/7)x < g(x) which is an improvement over the former bound
-2x < g(x).
Doing the substitution again gave -(8/7)x < g(x) again and
the new bound g(x) < (16/13)x.
In general, in two steps you can go from
-2x < -ax < g(x) < bx < 4x
to
-2x < -ax < -(2b/(3+b))x < g(x) < (4b/(9+b))x < bx < 4x
I didn't get around to setting up the induction to prove
this, and then showing that the convergence is to zero on
both sides (as opposed to, say, converging to something
useless like (-1/1000)x < g(x) < (1/1000)x. You have to
show both descending sequences converge to zero). After
seeing .3, I would do both essentially the same way that I
think he did it there (with f(x) instead of with f(x) - 2x
which I am working on).
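The two-step bound map can be iterated numerically to make the convergence
claim plausible (this only illustrates the map above; it is not the missing
induction):

```python
# the two-step improvement: from  -a x < g(x) < b x  we get
#   a_new = 2b/(3+b)  and  b_new = 4b/(9+b)
a, b = 2.0, 4.0          # the starting bounds -2x < g(x) < 4x
for _ in range(100):
    a, b = 2*b/(3 + b), 4*b/(9 + b)
# both bounds squeeze to 0, forcing g = 0 and hence f(x) = 2x
assert a < 1e-10 and b < 1e-10
```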
************************************************************************
Problem A-6: If a linear transformation A on an n-dimensional vector
space has n + 1 eigenvectors such that any n of them are linearly
independent, does it follow that A is a scalar multiple of the
identity? Prove your answer.
This was the easiest one, I scribbled my answer down on
paper in less than four minutes! :-)
We are working with a vector space over some field, with
operation V + W (addition of vectors) and aV (multiplication
of a vector by a scalar, i.e., by an element of the field).
A is a linear transformation, so A(V) is some other vector
and A(aV+bW) = aA(V) + bA(W). An eigenvector is a vector V
such that A(V) = aV for some scalar a, which is the
corresponding eigenvalue.
So let V0, V1, ..., Vn be the given n+1 eigenvectors, with
eigenvalues a0, a1, ..., an, so that A(Vi) = ai Vi for
i = 0, 1, ..., n.  We are told that V1, ..., Vn are linearly
independent; since the dimension of the space is n these n
vectors must form a basis and so span the space. So write
V0 as a linear combination of the V1, ..., Vn.
V0 = b1 V1 + ... + bn Vn = sum(bi Vi)
In my notation sum(... i ...) will always be over i = 1,
..., n.
Now A(V0) = a0 V0 = sum(a0 bi Vi). But A(V0) = A(sum(bi Vi))
= sum(bi A(Vi)) [since A is linear] = sum(bi ai Vi)
[because A(Vi) = ai Vi]. Equating the two expressions for
A(V0) gives sum(bi ai Vi) = sum(a0 bi Vi). Subtracting
these gives a linear combination of independent vectors
which is the zero vector, and so each coefficient is the
zero scalar, i.e., in the underlying field
bi ai = a0 bi i = 1, ..., n
Thus each ai = a0; let the common value be a = a0 = a1 = ... = an.
Now if W is any vector, express it as a combination of V1,
..., Vn to get W = sum(bi Vi). Then A(W) = A(sum(bi Vi)) =
= sum(bi a Vi) = a(sum bi Vi) = aW, i.e. A(W) = aW for every
vector W, and so A is a scalar multiple of the identity.
This doesn't use the full hypothesis of the problem that any
n of V0, V1, ..., Vn are independent; it only used that one
such set is independent. In fact, the whole answer is wrong
because in going from bi ai = a0 bi to ai = a0 I missed the
alternative solution that bi = 0 and ai not= a0.
Oh well.
I'll have to go back and read .3 on this.
---------------------------------------------------------------------------
Problem B-1: A *composite* (positive integer) is a product ab with a
and b not necessarily distinct integers in {2,3,4,...}. Show that
every composite is expressible as xy + xz + yz + 1, with x, y, and z
positive integers.
I tried a few examples and found (x,y,z) for
4 (1,1,1)
6 (2,1,1)
8 (3,1,1)
9 (2,2,1)
10 (4,1,1)
12 (3,2,1)
Here I noticed the pattern that you represent ab as
xy + xz + yz + 1 by letting z = 1, when it reduces to
xy + x + y + 1 = (x + 1)(y + 1).
So let x = a-1, y = b-1, z = 1.
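The construction is a one-liner to verify:

```python
def decompose(a, b):
    """Represent the composite a*b (a, b >= 2) as xy + xz + yz + 1 with z = 1."""
    return (a - 1, b - 1, 1)

for a in range(2, 30):
    for b in range(2, 30):
        x, y, z = decompose(a, b)
        assert x > 0 and y > 0 and z > 0
        # xy + x + y + 1 = (x+1)(y+1) = ab
        assert x*y + x*z + y*z + 1 == a*b
```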
************************************************************************
Problem B-2: Prove or disprove: If x and y are real numbers with y >= 0
and y(y+1) <= (x+1)^2, then y(y-1) <= x^2.
After nineteen fruitless minutes I went on to the next one. :-)
************************************************************************
Problem B-3: For every n in the set Z+ = {1,2,...} of positive
integers, let r(n) be the minimum value of |c-d sqrt(3)| for all
nonnegative integers c and d with c + d = n. Find, with proof, the
smallest positive real number g with r(n) <= g for all n in Z+.
As c runs through 0, 1, ..., n the number c - d sqrt(3)
starts at - n sqrt(3) [less than zero] and is incremented by
1 + sqrt(3) as c is incremented by 1, ending at n when c =
n, d = 0 [and here c - d sqrt(3) = n is greater than zero].
So the minimum value of | c - d sqrt(3) | will either be for
the largest c for which c - d sqrt(3) is still negative or
the smallest c such that it is positive.  Let C be the
largest c such that c - d sqrt(3) = c(1 + sqrt(3)) - n sqrt(3)
is negative.  Then
0 <= C < n sqrt(3) / (1 + sqrt(3)) < C + 1 <= n
or C = [a n] where a = sqrt(3) / (1 + sqrt(3)) and [x] is
the largest integer <= x.
Now the difference in c - d sqrt(3) from c = C to c = C+1 is
as before, 1 + sqrt(3). So we have two possibilities for r(n),
the distance from zero to the largest negative value (at c = C)
or the distance from zero to the smallest positive value (at
c = C+1). The smaller distance, r(n), is <= half the
distance between the values for c = C and c = C+1, so for
all n, r(n) <= (1 + sqrt(3))/2. Call this upper bound G.
G is the desired value of g, i.e., G = (1 + sqrt(3))/2 is
the smallest positive real number g s.t. r(n) <= g for all
n from 1, 2, 3, ....
I already showed each r(n) <= G. To show that no smaller G'
can have each r(n) <= G' and G' < G, note that any such G'
is less than half of 1 + sqrt(3). Let epsilon = (G - G')/2.
Using a as specified above, for any positive integer n let
C = [an]. Then C < an < C + 1. Suppose an were within
epsilon of C + 1/2. I.e., | C + 1/2 - an | < epsilon. In
that case r(n) would be between G and G - epsilon, i.e., it
would be greater than G'. This would show that G really is
the least upper bound of the r(n). So can such an n be
shown to exist? It would require | (2C + 1)/2n - a | < epsilon/n.
Grumble.  Since a is a quadratic irrational, I'm sure it can be
approximated by rationals p/q with error less than a
constant divided by q^2.  I can't recall the precise result or
how to show it; taking such an approximation with q large
enough so that 1/q^2 << epsilon/q should be sufficient to
finish the proof.  Oh well, go on to the next problem.
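Numerically, r(n) never exceeds (1 + sqrt(3))/2 and gets arbitrarily close
to it.  A Python sketch using the closed form for C derived above:

```python
import math

SQ3 = math.sqrt(3)
A = SQ3 / (1 + SQ3)          # the constant a from the text
G = (1 + SQ3) / 2

def r(n):
    C = math.floor(A * n)     # largest c with c - d sqrt(3) still negative
    # the minimum of |c - d sqrt(3)| occurs at c = C or c = C+1
    return min(abs(c - (n - c)*SQ3) for c in (C, C + 1))

vals = [r(n) for n in range(1, 200001)]
assert max(vals) < G + 1e-9   # G is an upper bound ...
assert max(vals) > G - 1e-3   # ... and it is approached arbitrarily closely
```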
************************************************************************
Problem B-4: Prove that if sum from n=1 to infinity a(n) is a
convergent series of positive real numbers, then so is
sum from n=1 to infinity (a(n))^(n/(n+1)).
The series summing the a(n) converges if and only if the
sequence of u(n) = sum(i = 0 to n of a(i)) converges, if and
only if [definition of Cauchy sequence] for every positive
epsilon there is an N such that for all n > N, m > N
| u(n) - u(m) | < epsilon
which here reduces to for every positive epsilon there is an
N such that for all n > N, m > N, n < m
a(n+1) + ... + a(m) < epsilon
The series summing the a(n) converges so the sequence of
a(n) must converge to zero. Thus the sequence of 1/a(n)
must diverge to + oo. But what about the sequence of smaller
values (1/a(n))^(1/(n+1))?
Let b(n) and c(n) be given by
b(n) = a(n) if (1/a(n))^(1/(n+1)) <= 2
0 if (1/a(n))^(1/(n+1)) > 2
c(n) = 0 if (1/a(n))^(1/(n+1)) <= 2
a(n) if (1/a(n))^(1/(n+1)) > 2.
Note that for all n
a(n) = b(n) + c(n)
0 <= b(n) <= a(n)
0 <= c(n) <= a(n)
The series summing the a(n) converges to some value A, so
the series summing the b(n) must converge to some B and the
series summing the c(n) must converge to some C, with
0 <= B <= A, 0 <= C <= A, and A = B + C
Now let the serieses [sp?] a', b', c' be given by
a'(n) = (a(n))^(n/(n+1))
b'(n) = (b(n))^(n/(n+1))
c'(n) = (c(n))^(n/(n+1))
Note that for all n
a'(n) = b'(n) + c'(n)
0 <= b'(n) <= a'(n)
0 <= c'(n) <= a'(n)
Now choose epsilon > 0. I will show that b' and c' converge
by finding separate N [I mean an N for b' and an N for c']
such that n, m > N, n < m imply that
(b'(n+1) + ... b'(m)) < epsilon
(c'(n+1) + ... c'(m)) < epsilon
If (1/a(n))^(1/(n+1)) <= 2, then (a(n))^(1/(n+1)) >= 1/2,
and so a'(n) = (a(n))^(n/(n+1)) >= 1/2^n. Likewise, if
(1/a(n))^(1/(n+1)) > 2 then a'(n) < 1/2^n. Given the
definitions of b and c this means that all of the non-zero
values of b'(n) are >= 1/2^n, and all of the non-zero values
of c'(n) are < 1/2^n. So sum(c'(n)) clearly converges, and
if Nc is chosen large enough so that 1/(2^Nc) < epsilon
then Nc will work for the N sought above.
Now consider b'. Since the sum of the a(n) converges, the
sequence of a(n) must converge to zero, and so there is an
N1 such that for all n > N1, a(n) < 1. For such a(n), it is
the case that a(n) < (a(n))^(n/(n+1)) < (a(n))^(1/(n+1)) < 1.
Choose N2 > N1 such that for all n > N2, m > N2, n < m
(a(n+1) + ... a(m)) < epsilon/2
What is the sum (b'(n+1) + ... + b'(m))? If all of the
b'(i) in the sum are zero, then the sum is zero which is
certainly less than epsilon. So assume that at least one
non-zero term occurs in the sum (b'(n+1) + ... + b'(m)).
Let S be the sum, and i1, ..., ik the indices among n+1,
..., m such that the term in b' is non-zero.  Then
S = b'(i1) + ... + b'(ik) = a'(i1) + ... a'(ik)
and no term in the sum is zero.
Further, we know that a(i1) + ... + a(ik) < epsilon/2
because the sum of a(n+1) + ... + a(m) < epsilon/2.
So what is the ratio S/(a(i1) + ... a(ik))?  The ratio of
a'(i1) to a(i1) is
	a'(i1)/a(i1) = ((a(i1))^(i1/(i1+1)))/a(i1)
	             = (1/a(i1))^(1/(i1+1))
By the definition of the sequence b, (1/a(i1))^(1/(i1+1)) <= 2
so a'(i1)/a(i1) is <= 2.  The same result holds for all of
i1, ..., ik. So S is a sum of terms of a' each of
which is less than or equal to twice the corresponding term
of a. Therefore S <= 2(a(i1) + ... a(ik)) < epsilon.
So N2 is the sought N for the specified epsilon, which
shows that the series summing b'(n) converges.
Since b' and c' both converge and for all n a'(n) = b'(n) +
c'(n) it follows that a' also converges. So for any
sequence of positive reals a(n) [n = 1, 2, 3, ....] if the
sum of the a(n) converges then so does the sum of (a(n))^(n/(n+1)).
This was a lot easier to describe on paper than to type in.
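The pointwise fact behind the b/c split can be spot-checked numerically
(my sketch; writing a' = a^(n/(n+1)), either the ratio test puts a' within
a factor 2 of a, or a' is already below the geometric tail 1/2^n):

```python
import random

# either (1/a)^(1/(n+1)) <= 2 and then a' <= 2a,  or else a' < 1/2^n
random.seed(1)
for _ in range(1000):
    n = random.randint(1, 60)
    a = random.uniform(1e-6, 1.0) ** random.uniform(1, 50)  # a small positive term
    ap = a ** (n / (n + 1))
    if (1/a) ** (1/(n + 1)) <= 2:
        assert ap <= 2*a * (1 + 1e-12)
    else:
        assert ap < 0.5 ** n
```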
************************************************************************
Problem B-5: For positive integers n, let M(n) be the 2n + 1 by 2n + 1
skew-symmetric matrix for which each entry in the first n subdiagonals
below the main diagonal is 1 and each of the remaining entries below
the main diagonal is -1. Find, with proof, the rank of M(n).
(According to the definition the rank of a matrix is the largest k
such that there is a k x k submatrix with non-zero determinant.)
One may note that M(1) = ( 0 -1 1 ) and M(2) = ( 0 -1 -1 1 1 )
( 1 0 -1 ) ( 1 0 -1 -1 1 )
( -1 1 0 ) ( 1 1 0 -1 -1 )
( -1 1 1 0 -1 )
( -1 -1 1 1 0 )
The first thing to note is that each M(n) is singular. Let
the determinant of M(n) be A(n). If you multiply each of
the 2n+1 rows of M(n) by -1, you get a matrix -M(n) with
determinant A(n)(-1)^(2n+1) = -A(n). But -M(n) is the
transpose of M(n) [which is skew-symmetric] and so its
determinant is the same as the determinant of M(n), i.e.,
A(n). So A(n) = - A(n) or A(n) = 0, and therefore M(n) is
singular and rank M(n) < 2n + 1.
Now drop the last row and column of M(n) to form the 2n by
2n matrix which I will call m(n). Every diagonal entry of
m(n) is zero, and only the diagonal entries of m(n) are
zero. All of the non-zero entries of m(n) are either 1 or
-1. Now the determinant a(n) of m(n) is a sum of (2n)!
products of elements of m(n). Each product has as its
factors numbers in {-1, 0, 1} and so each product is also
one of -1, 0, or 1. a(n) is the sum over each permutation s
of {1,...,2n} of +/- m(n)[1,s(1)] * m(n)[2,s(2)] * ... * m(n)[2n,s(2n)]
where + is used for even permutations s and - for odd
permutations s.  Call s a derangement if s(i) not= i for
all i in {1,...,2n}.  Note that the term for permutation s
is non-zero if and only if s is a derangement: if s(i) = i
for any i in {1,...,2n} then m(n)[i,s(i)] = m(n)[i,i] = 0
because it is on the diagonal, while a derangement's term is
a product of factors from {-1, 1}.
It is a well known result that the number D of permutations of
{1,...,2n} that are derangements is given by
	D = (2n)!(1/0! - 1/1! + 1/2! - ... + 1/(2n)!)
	  = (2n)!/2! - (2n)!/3! + ... + (2n)!/(2n)!
[I love quoting well known results. Only after
everything else on the exam is done do you bother
proving well known results.]
Each term but the last has an uncancelled factor of 2n in
the numerator and so is even; the last term is equal to one
and so is odd.  So the number of derangements D is odd, and the
number of non-derangements, (2n)! - D, is also odd.
So the determinant a(n) is a sum of an odd number of zero
terms and an odd number of terms equal to 1 or -1.  This sum
cannot be zero [because you need an equal number of 1's and
-1's to sum to zero, which means the number of both together
is even], so a(n) is not zero, and m(n) is not singular.
So for each n rank M(n) >= 2n but rank M(n) < 2n+1, so it
must be the case that
rank M(n) = 2n
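Both halves of the argument -- the odd derangement count and the odd (hence
non-zero) determinant of m(n) -- can be confirmed for small n with a Python
sketch (permutation-expansion determinant, so only small n are feasible):

```python
from itertools import permutations
from math import factorial, prod

def M(n):
    """The (2n+1) x (2n+1) skew-symmetric matrix from the problem statement."""
    m = 2*n + 1
    A = [[0]*m for _ in range(m)]
    for i in range(m):
        for j in range(i):
            A[i][j] = 1 if i - j <= n else -1
            A[j][i] = -A[i][j]
    return A

def det(A):
    """Determinant by the permutation expansion (fine for tiny matrices)."""
    k = len(A)
    def sign(p):
        s = 1
        for i in range(k):
            for j in range(i + 1, k):
                if p[i] > p[j]:
                    s = -s
        return s
    return sum(sign(p) * prod(A[i][p[i]] for i in range(k))
               for p in permutations(range(k)))

def derangements(k):
    """D(k) via the standard recurrence D(k) = (k-1)(D(k-1) + D(k-2))."""
    d0, d1 = 1, 0
    for i in range(2, k + 1):
        d0, d1 = d1, (i - 1)*(d0 + d1)
    return d1 if k >= 1 else d0

for n in (1, 2, 3):
    D = derangements(2*n)
    assert D % 2 == 1 and (factorial(2*n) - D) % 2 == 1
    mn = [row[:2*n] for row in M(n)[:2*n]]   # drop the last row and column
    assert det(mn) % 2 == 1                   # odd, hence non-zero
```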
************************************************************************
Problem B-6: Prove that there exist an infinite number of ordered
pairs (a,b) of integers such that for every positive integer t the
number at + b is a triangular number if and only if t is a triangular
number. (The triangular numbers are the t(n) = n(n + 1)/2 with n in
{0,1,2,...}.)
Also in reply .1. I also got the result there that an
infinite number of such pairs (a,b) are given by
a = (2k+1)^2 b = k(k+1)/2
Actually I didn't finish the proof that for such a,b the
number at+b is triangular if and only if t is triangular.
But I did show that t triangular implied at+b triangular.
Maybe I can finish the proof as I type it in.
All numbers under consideration are nonnegative integers.
k(k+1)/2 is an integer because one of k and k+1 must be even.
Lemma: T is triangular if and only if 8T + 1 is an odd square.
I didn't decide to prove this to start, instead I began with
the triangular number T = k(k+1)/2 and noticed that 2T =
k(k+1) and so 2T + 1/4 = k^2 + k + 1/4 = (k + 1/2)^2 or that
8T + 1 = (2k + 1)^2, an odd square. In proving this I would
leave out the fractions because I am only considering
nonnegative integers, so go from T = k(k+1)/2 to 8T = 4k(k+1)
to 8T + 1 = 4k(k+1) + 1 = 4k^2 + 4k + 1 = (2k + 1)^2.
Likewise, the number S such that 8S + 1 = (2m + 1)^2 must
satisfy 8S + 1 = 4m^2 + 4m + 1, or 8S = 4m^2 + 4m = 4m(m+1)
and so S = m(m+1)/2 is triangular.
Okay, now what conditions must a and b satisfy for at+b to
be triangular for every triangular t?  Well, enumerate the
triangular numbers t by t(m) = m(m+1)/2 for m = 0,1,2,....
You get that am(m+1)/2 + b is triangular and so eight times
that plus one is an odd square, i.e.,
4am(m+1) + 8b + 1 = (2f(m) + 1)^2
where f(m) is such that 2f(m)+1 is the odd number whose
square is 8(at+b) + 1; it must exist if at+b is to be
triangular. What is f(m) in terms of m? The left hand side
of the above equality is quadratic in m, the right hand side
quadratic in f(m).  So it seems that f(m) must be roughly
linear in m.  Well, try out linear to start.  Say that
f(m) = cm + s and see what happens.
(2f(m) + 1)^2 = (2(cm + s) + 1)^2
= (2cm + (2s+1))^2
= 4c^2m^2 + 4cm(2s+1) + (2s+1)^2
= 4am^2 + 4am + 8b+1
Matching the m^2 term it seems that a = c^2. Plug this into
the last line and continue grinding away
= 4c^2m^2 + 4c^2m + (8b+1)
Equate the last two and subtract out 4c^2m^2 to get
4cm(2s+1) + (2s+1)^2 = 4am + (8b+1)
Matching the m term gives a = c(2s + 1). But a = c^2 seems
required by the above; together these imply c = (2s + 1).
Plug that in.
(2s+1)^2 = 8b+1
Oh I recognize that, it says b is a triangular number, in
fact b = s(s+1)/2.
So try a = (2s+1)^2, b = s(s+1)/2.
If t is the triangular number m(m+1)/2 then at+b is
(2s+1)^2 m(m+1)/2 + s(s+1)/2
and 8(at+b) + 1 is
4(2s+1)^2 m(m+1) + 4s(s+1) + 1
and this eventually worked out to be the odd square
(2(2s+1)m + 2s + 1)^2
[Don't believe me? :-) Expand them both out!]
So for the given a and b, t is triangular implies at+b is
triangular.
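Expanding both sides is mechanical; a loop makes it painless (a pure check
of the displayed identity, nothing more):

```python
# check: with a = (2s+1)^2, b = s(s+1)/2 and t = m(m+1)/2,
# 8(at+b) + 1 equals the odd square (2(2s+1)m + 2s+1)^2
for s in range(20):
    a, b = (2*s + 1)**2, s*(s + 1)//2
    for m in range(100):
        t = m*(m + 1)//2
        assert (2*(2*s + 1)*m + 2*s + 1)**2 == 8*(a*t + b) + 1
```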
However, it remains to be shown that if 8(at+b) + 1 is an
odd square then 8t+1 must itself be an odd square, i.e.,
at+b is triangular implies t is triangular. Then the
proof will be complete.
So try that with a = (2s+1)^2, b = s(s+1)/2.
(2k+1)^2 = 8(at+b)+1 = 8((2s+1)^2 t + s(s+1)/2) + 1
4k^2 + 4k + 1 = 8(4s^2 + 4s + 1)t + 4s(s+1) + 1
4k^2 + 4k = 8(4s^2 + 4s + 1)t + 4s(s+1)
k^2 + k = 2(4s^2 + 4s + 1)t + s(s+1)
((k^2 + k) - (s^2 + s))/2 = (2s + 1)^2 t
Hmm. The left hand side is a difference of two triangular
numbers, the right hand side is a square times t. So if the
left hand side is an odd square then t must also be an odd
square. The left hand side factors to (k+s+1)(k-s)/2. Wait
a second, it is 8t + 1 that I want to be a square, not t.
So isolate t,
(k+s+1)(k-s)/(2 (2s+1)^2) = t
4(k+s+1)(k-s)/(2s+1)^2 + 1 = 8t + 1
Can you grind the left hand side into an odd square?
It is certainly odd, since the odd (2s+1)^2 in the
denominator cannot cancel out the 4 in the numerator.
Anyway, I'll stop here. Have fun. :-)
Dan
|