Title:                  Mathematics at DEC
Moderator:              RUSURE::EDP
Created:                Mon Feb 03 1986
Last Modified:          Fri Jun 06 1997
Last Successful Update: Fri Jun 06 1997
Number of topics:       2083
Total number of notes:  14613
From: ROLL::USENET "USENET Newsgroup Distributor" 23-OCT-1984 23:16
To: HARE::STAN
Subj: USENET net.math newsgroup articles

Newsgroups: net.math
Path: decwrl!decvax!dartvax!chuck
Subject: Re: Some Math problems
Posted: Tue Oct 23 00:42:09 1984

<Hi! I'm Judy. Fly me to Hawaii!>

>1. Given 4 points on a plane, (3,0,0), (0,0,0), (3,0,5) and (0,0,5),
>   (i.e. a plane on the xz plane), how do you determine the equation
>   of that plane?

I'm not sure I understand this question.  Let me assume that you have
been given a number of points (purportedly all lying in the same
plane) and you want to discover an equation for this plane.

First, since a plane is determined by 3 points, choose 3 points from
the set of points that you have been given which are not collinear.
(If all triplets of points are collinear, then all your points lie on
the same line and you won't be able to find a unique equation for the
plane.)

[3 points are collinear if some nontrivial linear combination of
them, with coefficients summing to zero, adds up to zero.  Let our 3
selected points be P1=(x1,y1,z1), P2=(x2,y2,z2), and P3=(x3,y3,z3).
If there exist a,b,c in R, not all zero, with a+b+c=0, such that
aP1+bP2+cP3=0, then the points are collinear.  So if the system of
equations:

	ax1+bx2+cx3 = 0
	ay1+by2+cy3 = 0
	az1+bz2+cz3 = 0
	a + b + c   = 0

has a solution other than a=b=c=0, then the points are collinear.]

The equation of a plane is given by ax+by+cz+d=0.  So we need to
solve the system of equations:

	ax1+by1+cz1+d=0
	ax2+by2+cz2+d=0
	ax3+by3+cz3+d=0

The astute reader will notice there will usually be more than one
solution.  These solutions will be scalar multiples of each other:
when the 3 points are not collinear, the solution space is
one-dimensional.

Example:  Given points (3,0,0), (0,0,0), (3,0,5) and (0,0,5),
determine the equation of the plane on which they lie.

Choose (3,0,0), (0,0,0), and (3,0,5) as 3 points that determine the
plane.  Then, we solve:

	3a+0b+0c+d=0
	0a+0b+0c+d=0
	3a+0b+5c+d=0

This gives a=c=d=0.  Choose b=1 and the desired equation pops out
as y=0.
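[Editor's sketch, not part of the original post: the homogeneous system above can be solved mechanically with an SVD, whose last right-singular vector spans the null space.  This assumes numpy; the function name is illustrative.]

```python
import numpy as np

def plane_coefficients(p1, p2, p3):
    """Find (a, b, c, d) with a*x + b*y + c*z + d = 0 for three points.

    The three equations a*xi + b*yi + c*zi + d = 0 form a homogeneous
    3x4 system; any nonzero null-space vector works, and all solutions
    are scalar multiples of one another when the points are not
    collinear.  The last right-singular vector spans that null space.
    """
    m = np.array([[*p1, 1.0], [*p2, 1.0], [*p3, 1.0]])
    _, _, vt = np.linalg.svd(m)
    return vt[-1]          # null-space vector (a, b, c, d)

# The example from the post: three of the four given points.
a, b, c, d = plane_coefficients((3, 0, 0), (0, 0, 0), (3, 0, 5))
# Up to scale this is (0, 1, 0, 0), i.e. the plane y = 0.
```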
>2. Given a matrix A where
>
>            .68    -.6928   .24    0
>            .6928   .5     -.52    0
>     A =    .24     .52     .82    0
>             0       0       0     1
>
> find a transformation B such that BB=A.

This one seems a lot harder.  Does anyone know if there is a general
method of taking the square root of a square matrix?

One approach that I'm investigating uses the Newton-Raphson method.
This method appears to have promise.  I used it on a 2x2 matrix and
came up with an answer.  Our Honeywell went to bed for the night
before I could try my algorithm out on the above matrix.

The Newton-Raphson method is used to take the square root of a real
number.  Suppose we want to know the square root of X.  We compute
successive approximations to the square root.  These approximations
converge rather quickly to the square root.  The following PL/I
routine implements the Newton-Raphson method:

sqrt: procedure (x, epsilon) returns (float);
    declare x float,        /* number to find root of   */
            epsilon float,  /* how close we need to get */
            guess float;    /* our approximation        */

    guess = 1;              /* initial approximation    */
    do while (abs (x - guess*guess) > epsilon);  /* loop til we get close */
        guess = (guess + x/guess) / 2;           /* get a new guess       */
    end;
    return (guess);
end sqrt;

I am attempting to approximate the square root of a matrix using this
algorithm but using a square matrix instead of a floating point
number.  I don't really know that my sequence of approximations will
converge for matrices.  Also, there is the possibility that you may
run into an approximation which is non-invertible.

Anyway... I hope this hasn't been too tedious.

-- Chuck
dartvax!chuck
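[Editor's sketch, not from the original thread: Chuck's transplant of the scalar iteration to matrices, written with numpy.  The function name and iteration count are illustrative.  Starting from the identity makes every iterate commute with A, so the recurrence acts as the scalar Newton step on each eigenvalue; it converges when A has no eigenvalues on the negative real axis, and, as the post worries, an iterate can in principle become singular.]

```python
import numpy as np

def matrix_sqrt_newton(a, iterations=30):
    """Newton iteration for a square root of the matrix a (a sketch).

    This is guess = (guess + x/guess)/2 with division replaced by a
    matrix inverse: B <- (B + A @ inv(B)) / 2, starting from I.
    """
    b = np.eye(len(a))
    for _ in range(iterations):
        b = (b + a @ np.linalg.inv(b)) / 2.0
    return b

A = np.array([[0.68,  -0.6928, 0.24, 0.0],
              [0.6928, 0.5,   -0.52, 0.0],
              [0.24,   0.52,   0.82, 0.0],
              [0.0,    0.0,    0.0,  1.0]])
B = matrix_sqrt_newton(A)   # B @ B should reproduce A
```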
171.1  FELIX::JANZEN  Wed Oct 24 1984 14:24  (26 lines)
Give me a break.  Use vectors.

Points A, B, C; variable point N.  Then

	(B - A) X (C - A) * (N - A) = 0

where X is the cross product and * is the dot product.

First, what I did there is translate the vectors to a common origin
(although this isn't necessary here, because the origin is in the
plane by definition) for the general case where the origin may not be
in our plane; thus B-A and C-A.  Second, cross these two vectors to
get a vector perpendicular to the plane of the two vectors chosen.
Third, dot this with the variable vector (also translated to A) and
set the result to zero: this selects exactly the points N for which
N-A is perpendicular to the normal vector, i.e. the points of the
plane.  Aren't vectors wonderful?

	((0,0,0)-(3,0,0)) X ((3,0,5)-(3,0,0)) * ((N1,N2,N3)-(3,0,0)) = 0

The dot product of perpendicular vectors is equal to zero.

	(-3,0,0) X (0,0,5) * ((N1,N2,N3)-(3,0,0)) = 0
	(0,15,0) * (N1-3, N2, N3) = 0
	15 N2 = 0

Then N2 must be equal to 0.  Son of a gun, the x-z plane!!!  Y is
always zero!  How 'bout that!

					Tom
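[Editor's sketch: Tom's vector recipe in numpy, using the same three points.  The helper name is illustrative.]

```python
import numpy as np

# n = (B - A) x (C - A) is normal to the plane, and a point N lies on
# the plane exactly when n . (N - A) = 0.
A = np.array([3.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 0.0])
C = np.array([3.0, 0.0, 5.0])

n = np.cross(B - A, C - A)   # (-3,0,0) x (0,0,5) = (0, 15, 0)

def on_plane(N):
    """True when N satisfies n . (N - A) = 0, i.e. N is in the plane."""
    return np.isclose(np.dot(n, N - A), 0.0)
```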
171.2  HARE::STAN  Wed Oct 24 1984 23:08  (30 lines)
From: ROLL::USENET "USENET Newsgroup Distributor" 24-OCT-1984 22:07
To: HARE::STAN
Subj: USENET net.math newsgroup articles

Newsgroups: net.math
Path: decwrl!decvax!dartvax!chuck
Subject: Re: Some Math problems
Posted: Tue Oct 23 03:20:22 1984

<You little sqrt.>

>2. Given a matrix A where
>
>            .68    -.6928   .24    0
>            .6928   .5     -.52    0
>     A =    .24     .52     .82    0
>             0       0       0     1
>
> find a transformation B such that BB=A.

Using the Newton-Raphson method described earlier, I came up with

	  .914249   -.399978    .0642701   0
	  .399978    .866079   -.300184    0
    B =   .0642701   .300184    .95183    0
	  0          0          0          1

-- Chuck
dartvax!chuck
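[Editor's check, not part of the post: squaring Chuck's printed B should reproduce A to within the rounding of the printed digits.  Assumes numpy.]

```python
import numpy as np

A = np.array([[0.68,   -0.6928,  0.24,      0.0],
              [0.6928,  0.5,    -0.52,      0.0],
              [0.24,    0.52,    0.82,      0.0],
              [0.0,     0.0,     0.0,       1.0]])
B = np.array([[0.914249, -0.399978, 0.0642701, 0.0],
              [0.399978,  0.866079, -0.300184, 0.0],
              [0.0642701, 0.300184,  0.95183,  0.0],
              [0.0,       0.0,       0.0,      1.0]])

# Largest entrywise discrepancy between B*B and A; should be on the
# order of the printed rounding error.
residual = np.abs(B @ B - A).max()
```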
171.3  TURTLE::GILBERT  Thu Oct 25 1984 20:29  (14 lines)
The following method can be used to find the "square root" of a
square matrix.  Perform an LU decomposition of the matrix, find the
"square roots" of the parts of the decomposition, and take the
product of these "square roots" to get the "square root" of the
original matrix.  (Caveat: (sqrt(L)*sqrt(U))^2 equals L*U only when
the two square roots commute, so the recombination step needs care.)

Note that finding the "square root" (y) of an upper triangular
matrix (x) is fairly easy:

	y[i,i] = sqrt(x[i,i])
	y[i,j] = 0, if i > j

And the remaining elements of y can be found by back-substitution,
solving a set of equations that are linear in the remaining y[i,j].
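[Editor's sketch of the easy part above, the square root of an upper triangular matrix.  Assumes numpy and a positive diagonal; the function name is illustrative.]

```python
import numpy as np

def sqrt_upper_triangular(x):
    """Square root of an upper-triangular matrix with positive diagonal.

    y[i,i] = sqrt(x[i,i]) as the note says; expanding x = y @ y gives,
    for i < j,
        x[i,j] = y[i,i]*y[i,j] + sum_{i<k<j} y[i,k]*y[k,j] + y[i,j]*y[j,j],
    which is linear in the unknown y[i,j] once the entries closer to
    the diagonal are known, so we sweep the superdiagonals outward.
    """
    n = len(x)
    y = np.zeros_like(x, dtype=float)
    for i in range(n):
        y[i, i] = np.sqrt(x[i, i])
    for d in range(1, n):              # d-th superdiagonal
        for i in range(n - d):
            j = i + d
            s = sum(y[i, k] * y[k, j] for k in range(i + 1, j))
            y[i, j] = (x[i, j] - s) / (y[i, i] + y[j, j])
    return y

U = np.array([[4.0, 2.0, 1.0],
              [0.0, 9.0, 3.0],
              [0.0, 0.0, 16.0]])
Y = sqrt_upper_triangular(U)   # Y @ Y should give back U
```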
171.4  HARE::STAN  Wed Oct 31 1984 22:32  (32 lines)
Newsgroups: net.math
Path: decwrl!decvax!genrad!mit-eddie!godot!harvard!wjh12!foxvax1!brunix!browngr!jfh
Subject: Re: Some Math problems
Posted: Sun Oct 28 20:48:16 1984

There's a general method for finding the square root of a symmetric
matrix:  Every real symmetric matrix has real non-negative
eigenvalues (see the definition of eigenvalue in any linear algebra
book).  Let the eigenvalues be a1, a2, ..., an, and corresponding
unit eigenvectors be v1, v2, ..., vn (this means that M vi = ai vi,
where M is the matrix in question).  Now line up all the eigenvectors
in a matrix, each eigenvector being a column, and call the matrix Q.
Then Qt M Q is a diagonal matrix D, with the eigenvalues of M down
the diagonal (here Qt denotes the transpose of Q).  Let E be the
square root of D (i.e. take the square root of each entry in D).
Then a square root of M is given by

	Q E Qt

This method also works for non-symmetric matrices, if they have all
positive eigenvalues, and all are real, but you must use Q inverse
rather than Q transpose (alas).

There is also a polar decomposition of a matrix into the product of a
real symmetric matrix R and a unitary matrix U (in the case of a 2x2
complex matrix, these correspond to r and theta for polar coordinates
in the plane), and you can do something from there, but you probably
don't want to...
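[Editor's sketch of the Q E Qt recipe, assuming numpy.  As the later replies in this topic point out, the input must be positive semi-definite for a real result; the function name is illustrative.]

```python
import numpy as np

def sqrt_symmetric_psd(m):
    """Square root of a real symmetric positive semi-definite matrix.

    Diagonalize M = Q D Qt with orthonormal eigenvectors (numpy's
    eigh does this for symmetric input), take elementwise square
    roots of the eigenvalues, and reassemble as Q E Qt.
    """
    eigenvalues, q = np.linalg.eigh(m)
    e = np.diag(np.sqrt(eigenvalues))
    return q @ e @ q.T

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # eigenvalues 1 and 3, both positive
R = sqrt_symmetric_psd(M)    # R @ R should give back M
```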
171.5  HARE::STAN  Wed Oct 31 1984 22:32  (64 lines)
Newsgroups: net.math
Path: decwrl!decvax!harpo!whuxlm!whuxl!houxm!ihnp4!inuxc!pur-ee!uiucdcsb!robison
Subject: Re: Some Math problems
Posted: Sun Oct 28 12:39:00 1984
Nf-ID: #R:ihuxt:-68900:uiucdcsb:9700030:000:2041
Nf-From: uiucdcsb!robison Oct 28 14:39:00 1984

I originally sent the answer to question 2 directly to Barry.  Since
no one else posted an answer, I'll "reprint" mine below:

-----------------------------------------------------------------------------
- 2. Given a matrix A where
-
-            .68    -.6928   .24    0
-            .6928   .5     -.52    0
-     A =    .24     .52     .82    0
-             0       0       0     1
-
- find a transformation B such that BB=A.

This is the problem of finding the square root of matrix A.
(Notation: "*" means multiplication, "**" means exponentiation.  I
wish I could have superscripts/subscripts or at least an APL
character set!)

First find the matrix Z such that L = (1/Z)*A*Z, where L is a
diagonal matrix:

	     l1  0   0   0
	L =  0   l2  0   0
	     0   0   l3  0
	     0   0   0   l4

Then A = Z*L*(1/Z) and B = Z*(L**.5)*(1/Z).  (Why the answer for B?
Try multiplying out BB.)  Since L is a diagonal matrix, its square
root is trivially:

	l1**.5   0        0        0
	0        l2**.5   0        0
	0        0        l3**.5   0
	0        0        0        l4**.5

The values l1, l2, l3, and l4 are the eigenvalues of A.  The columns
of Z are the eigenvectors of A.  Methods for finding eigenvectors and
eigenvalues may be found in a linear algebra textbook.

The eigenstuff has many other uses.  You can, for instance, take the
"tangent" of a matrix by finding L and Z, then taking the tangent of
each element of L, and then transforming back in the same manner as
for finding the square root.  (This works for any function with a
Maclaurin expansion.)  One neat application involves taking e raised
to a matrix power.  One can take the formula for solving a single
linear differential equation and use it for solving a system of
linear differential equations by substituting matrix exponentiation
for scalar exponentiation.
-----------------------------------------------------------------------------

Added note: the Newton-Raphson method probably only works for
matrices with positive real determinants, but I can't prove this.

- Arch @ uiucdcs
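[Editor's sketch of the f(A) = Z f(L) (1/Z) recipe above, applied to the matrix exponential, assuming numpy and a diagonalizable input; the function name is illustrative.]

```python
import numpy as np

def matrix_function(a, f):
    """Apply a scalar function f to a diagonalizable matrix a.

    With A = Z L (1/Z), f(A) = Z f(L) (1/Z), applying f to each
    eigenvalue on the diagonal of L.  Requires a full set of
    eigenvectors (Z invertible); for real input with complex
    eigenvalue pairs the imaginary residue is roundoff, dropped here.
    """
    eigenvalues, z = np.linalg.eig(a)
    fl = np.diag(f(eigenvalues))
    return (z @ fl @ np.linalg.inv(z)).real

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # eigenvalues 2 and 3
E = matrix_function(A, np.exp)      # e raised to a matrix "power"
# For this triangular A, the exact answer is
#   [[e**2, e**3 - e**2], [0, e**3]].
```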
171.6  HARE::STAN  Wed Oct 31 1984 22:32  (21 lines)
From: ROLL::USENET "USENET Newsgroup Distributor" 31-OCT-1984 22:27
To: HARE::STAN
Subj: USENET net.math newsgroup articles

Newsgroups: net.math
Path: decwrl!decvax!harpo!whuxlm!whuxl!houxm!ihnp4!allegra!ask
Subject: Re: Some math questions
Posted: Tue Oct 30 08:28:28 1984

A correction to the note of ...browngr!jfh regarding the square root
of a matrix:

A real symmetric matrix has real, but not necessarily non-negative,
eigenvalues.  The square root algorithm given by jfh requires that A
be positive semi-definite.  It is easy to see why it must be so.  The
eigenvalues of A are the squares of those of B and hence must be
non-negative (this assumes, of course, that we are restricting
ourselves to real matrices).  With this proviso, the algorithm is
correct.

Krishna
171.7  HARE::STAN  Wed Oct 31 1984 22:32  (14 lines)
Newsgroups: net.math
Path: decwrl!decvax!genrad!mit-eddie!godot!harvard!seismo!brl-tgr!gwyn
Subject: Re: Some Math problems
Posted: Mon Oct 29 18:15:52 1984

>	Q E Qt
>
> This method also works for non-symmetric matrices, if they have
> all positive eigenvalues, and all are real, but you must use
> Q inverse rather than Q transpose (alas).

In the original query, the matrix was orthogonal, so its inverse WAS
its transpose.
171.8  HARE::STAN  Fri Nov 02 1984 22:10  (29 lines)
From: ROLL::USENET "USENET Newsgroup Distributor" 2-NOV-1984 22:07
To: HARE::STAN
Subj: USENET net.math newsgroup articles

Newsgroups: net.math
Path: decwrl!CSL-Vax!Navajo!worley
Subject: Re: Some Math problems
Posted: Wed Oct 31 15:12:17 1984

> There's a general method for finding the square root of
> a symmetric matrix:  Every real symmetric matrix has
> real non-negative eigenvalues ...

(-2) is a (trivial) 1x1 real symmetric matrix which does not have
non-negative eigenvalues.  You need the matrix to be real symmetric
positive definite (i.e. explicitly require that the eigenvalues be
positive) in order to guarantee that

	B = Q sqrt(D) Qt,  A = BB

will have a non-complex B.

> This method also works for non-symmetric matrices, if they have
> all positive eigenvalues, and all are real, but you must use
> Q inverse rather than Q transpose (alas).

You also need a full set of eigenvectors, which don't always exist
for a nonsymmetric matrix.
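[Editor's sketch of worley's last caveat, assuming numpy: a defective matrix has no full set of eigenvectors, so the diagonalization recipe breaks down even though a square root may still exist.]

```python
import numpy as np

# The Jordan block [[1, 1], [0, 1]] has a repeated eigenvalue but only
# a one-dimensional eigenspace, so the eigenvector matrix numpy
# returns is (numerically) singular and Q E (1/Q) cannot be formed.
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])
eigenvalues, v = np.linalg.eig(J)
rank = np.linalg.matrix_rank(v, tol=1e-8)   # 1, not 2: no full eigenbasis
```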