Tensor Analysis books
Which book is the best one to start studying tensor analysis with? I have Levi-Civita's "The absolute differential calculus", but it is too abstract for me. I would prefer one with more numerical examples and graphics, if possible.
Thanks in advance, Damian.
We used [url=http://www.amazon.com/gp/product/0387976639/ref=pd_cp_b_title/104-8532151-6379916]this one[/url] in my graduate-level differential geometry class, but I suspect you might also find it somewhat "abstract" for your taste, even though Fomenko and Novikov are well-known relativity physicists and try to keep things "physically grounded" whenever possible.
If differential geometry were easy, general relativity would be a high school subject.
[QUOTE=ewmayer;93850]We used [url=http://www.amazon.com/gp/product/0387976639/ref=pd_cp_b_title/104-8532151-6379916]this one[/url] in my graduate-level differential geometry class, but I suspect you might also find it somewhat "abstract" for your taste, even though Fomenko and Novikov are well-known relativity physicists and try to keep things "physically grounded" whenever possible.
If differential geometry were easy, general relativity would be a high school subject.[/QUOTE]I very much like Misner, Thorne & Wheeler's [i]Gravitation[/i], but that text is very much more than a book on tensor calculus. It does have a lot of pretty pictures and sound physical interpretations of objects which are often treated as very abstract mathematical constructs. Paul
Tensor Analysis
:rolleyes:
Damian, I'm not in the same league as ewmayer or xilman, nor can I ever measure up to them, but I would recommend the Schaum's Outline series volume 'Theory and Problems of Vector Analysis and an Introduction to Tensor Analysis' by Murray R. Spiegel, PhD. There are about 60 pages at the end devoted to tensor analysis, and I think that is sufficient preparation to move on to the books recommended by our colleagues. It has 480 solved problems, so you can get a better feel for the subject. It is widely available in U.S. libraries, and certainly in New York; I picked up my copy on sale for a throwaway price. My copy is collecting dust on my shelves. I do not profess to have gone through or even understood all of it, but I know a good book when I see one. Mally :coffee:
Thanks for the replies. I'm downloading the Misner, Thorne & Wheeler Gravitation book.
A newbie question: does the covariant/contravariant concept have anything to do with the transpose of a vector? I ask because I see that a_i*b^i gives the dot product (which is the same as the matrix product of one vector with the transpose of the other), and it is also the same result as the contraction of the tensors (the summation convention). Another question: how can I use tex tags in these posts? Thanks in advance, Damian.
High Brow.
:smile:
I may be terribly wrong, Damian, and I think you may be jumping the gun, but it all depends on your level. I knew a buddy of mine who studied this book on gravitation for his PhD thesis. As xilman says, it is much more than just tensor calculus, and note what ewmayer says about differential geometry. The climb to tensors is long and tedious and requires a good foundation in modern geometry. That sounds simple, but I don't want to discourage you. The covariant curvature tensor is of fundamental importance in Einstein's general theory of relativity; contravariance has to do with curvilinear co-ordinate systems. The two are related, with the latter usually learned before the former. All the best, Mally :coffee:
[quote]Another question: how can I use tex tags in these posts?[/quote]
[URL]http://www.mersenneforum.org/showthread.php?t=4576[/URL]
Thanks,
what I meant was: if I have two tensors [tex]A[/tex] and [tex]B[/tex], then the tensor contraction [tex] A_i B^i [/tex] equals the dot product of the two vectors, which itself equals the matrix product of the row vector [tex]A^t[/tex] with the column vector [tex]B[/tex]. Is this coincidental, or is there a connection between covariance/contravariance and the transpose of matrices? I guess the answer is that it is coincidental, because I can have a rank-3 tensor, and how would I define its transpose, since it is "similar" to a three-dimensional matrix? Thanks, Damian.
[QUOTE=Damian;94059]Thanks,
what I meant was: if I have two tensors [tex]A[/tex] and [tex]B[/tex], then the tensor contraction [tex] A_i B^i [/tex] equals the dot product of the two vectors, which itself equals the matrix product of the row vector [tex]A^t[/tex] with the column vector [tex]B[/tex]. Is this coincidental, or is there a connection between covariance/contravariance and the transpose of matrices? I guess the answer is that it is coincidental, because I can have a rank-3 tensor, and how would I define its transpose, since it is "similar" to a three-dimensional matrix? Thanks, Damian.[/QUOTE]In the special case of the Euclidean metric (Lorentz metric in GR) and Cartesian coordinates, the process of raising indices is the same as transposition. This is because the metric tensor, g, has an especially simple form --- the Euclidean metric is just the identity matrix and the Lorentz metric is the Euclidean metric with a single sign change in the x_0 component. Paul
[QUOTE=xilman;94062]In the special case of the Euclidean metric (Lorentz metric in GR) and Cartesian coordinates, the process of raising indices is the same as transposition. This is because the metric tensor, g, has an especially simple form --- the Euclidean metric is just the identity matrix and the Lorentz metric is the Euclidean metric with a single sign change in the x_0 component.
Paul[/QUOTE] OK, but take for example this tensor formula: [tex]A_{ij}x^iy^j [/tex]. The matrix-vector form would be [tex] x^t A y [/tex], because it would be inconsistent to write [tex]A x^t y^t [/tex]. Why does that happen? (Why do I have to transpose only one of the vectors and put it before the matrix?)
[QUOTE=Damian;94076]OK, but take for example this tensor formula:
[tex]A_{ij}x^iy^j [/tex]. The matrix-vector form would be [tex] x^t A y [/tex], because it would be inconsistent to write [tex]A x^t y^t [/tex]. Why does that happen? (Why do I have to transpose only one of the vectors and put it before the matrix?)[/QUOTE] The difference is that matrix-vector multiplication has conventions about how to loop over rows and columns. The product of two length-n vectors x and y could give either an nxn result or a 1x1 result, i.e. a scalar, depending on the order of the operands, so to get a scalar those conventions make it necessary to have the row vector on the left of the product and the column vector on the right. If one's convention is that vectors without transpose superscripts denote column vectors, that means x^t y gives a scalar.

Similarly, for a 3-way product of x, y and an nxn matrix A to yield a scalar, one must order things as (row vector)*A*(column vector). In your example, x^t A y is the only way for a product of A, x^t (row vector), and y (column vector) to be both well-defined and yield a scalar result. Note that even though matrix multiplication does not commute in general, it *is* associative: you can first calculate (x^t A) and right-multiply the resulting row vector by y, or first calculate (A y) and then left-multiply the resulting column vector by x^t; in either case the result is the same scalar.

The tensor index notation replaces this row/column-based convention with a different one, based on implied summation over a repeated index. This leads to a less visually intuitive procedure than the above, but again, it is unambiguous and (at least for vectors and matrices) completely equivalent to conventional matrix multiplication. In A_{ij}x^iy^j, the fact that you can do the index sum over either i first (equivalent to x^t A) or j first (== A y) simply reflects the associativity of matrix multiplication.