Computing the Laplacian in Polar Coordinates

Similar questions have been asked on this site but none of them seemed to help me. I’m asked to compute the Laplacian $$\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}$$ in terms of polar coordinates.

I did do it, but I don’t understand why what I did is correct, and I don’t understand the more “brute force” way to do it at all.

Here is what I did:

I calculated $\frac{\partial}{\partial r}$ and $\frac{\partial}{\partial \theta}$ in terms of $r,$ $\theta,$ $\frac{\partial}{\partial x}$ and $\frac{\partial}{\partial y}.$ This gave me a system of linear equations which I wrote as
$$\begin{pmatrix}\frac{\partial}{\partial r} \\ \frac{\partial}{\partial \theta}\end{pmatrix} = \begin{pmatrix}\cos\theta & \sin\theta \\ -r\sin\theta & r\cos\theta \end{pmatrix}\begin{pmatrix}\frac{\partial}{\partial x} \\ \frac{\partial}{\partial y}\end{pmatrix}.$$
I inverted to get
$$\begin{pmatrix}\frac{\partial}{\partial x} \\ \frac{\partial}{\partial y}\end{pmatrix} = \frac{1}{r}\begin{pmatrix}r\cos\theta & -\sin\theta \\ r\sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix}\frac{\partial}{\partial r} \\ \frac{\partial}{\partial \theta}\end{pmatrix},$$
and then I simply wrote
\begin{align}
\frac{\partial^2}{\partial x^2} &= \frac{\partial}{\partial x} \left( \frac{\partial}{\partial x}\right)\\
&= \left(\cos\theta \frac{\partial}{\partial r}-\frac{1}{r}\sin\theta\frac{\partial}{\partial \theta}\right)\left(\cos\theta \frac{\partial}{\partial r}-\frac{1}{r}\sin\theta\frac{\partial}{\partial \theta}\right)\\
&= \cos\theta\frac{\partial}{\partial r}\left(\cos\theta \frac{\partial}{\partial r}-\frac{1}{r}\sin\theta\frac{\partial}{\partial \theta}\right) -\frac{1}{r}\sin\theta\frac{\partial}{\partial \theta}\left(\cos\theta \frac{\partial}{\partial r}-\frac{1}{r}\sin\theta\frac{\partial}{\partial \theta}\right)\\
&= \cos^2\theta \frac{\partial^2}{\partial r^2} - \frac{2}{r}\sin\theta\cos\theta\frac{\partial^2}{\partial r \partial\theta} +\frac{1}{r^2}\sin^2\theta\frac{\partial^2}{\partial \theta^2}.
\end{align}
I similarly got
$$\frac{\partial^2}{\partial y^2} = \sin^2\theta \frac{\partial^2}{\partial r^2} + \frac{2}{r}\sin\theta\cos\theta\frac{\partial^2}{\partial r \partial\theta} +\frac{1}{r^2}\cos^2\theta\frac{\partial^2}{\partial \theta^2}.$$
Adding the two yields
$$\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2} = \frac{\partial^2}{\partial r^2}+\frac{1}{r^2}\frac{\partial^2}{\partial \theta^2},$$
which Spivak says is correct.

Explicitly, here are my questions:

  1. In my solution, when I found $ \frac{\partial^2}{\partial x^2},$ I simply “multiplied” the expressions in the second line of the large aligned equation (treating multiplication of the partial operators as composition). Why am I allowed to do this? Why does the expression on the left not act on the thing on the right, forcing me to do the product rule and other nonsense to get the answer?

  2. My original idea was just to compute the Laplacian using the chain rule. That is, write $\frac{\partial}{\partial x}=\frac{\partial}{\partial r} \frac{\partial r}{\partial x}$ and compute $\frac{\partial^2}{\partial x^2}$ from there. My problem with this is that I keep getting confused about which variables I should be writing everything in, and about how the partial derivative operators act on these other expressions. For example, I compute $\frac{\partial}{\partial x}=\frac{\partial}{\partial r} \frac{\partial r}{\partial x} + \frac{\partial}{\partial \theta} \frac{\partial \theta}{\partial x} = 2x \frac{\partial}{\partial r} - \frac{y}{x^2+y^2} \frac{\partial}{\partial \theta},$ but then I don’t know where to go (see the sketch of the partials below). Help understanding this method would be greatly appreciated.
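For reference, with $r=\sqrt{x^2+y^2}$ and $\tan\theta = y/x$, the partial derivatives that enter this chain-rule expansion are
$$\frac{\partial r}{\partial x} = \frac{x}{\sqrt{x^2+y^2}} = \cos\theta, \qquad \frac{\partial \theta}{\partial x} = -\frac{y}{x^2+y^2} = -\frac{\sin\theta}{r},$$
so that $\frac{\partial}{\partial x} = \cos\theta\,\frac{\partial}{\partial r} - \frac{1}{r}\sin\theta\,\frac{\partial}{\partial \theta}$, which agrees with the inverted matrix above.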

If anything is unclear let me know and I’ll make the necessary edits.

Return coordinates that make up Google Maps path.

I was working on a project in Android Studio (using Java) and used this tutorial to embed Google Maps into my app: https://www.youtube.com/watch?v=CCZPUeY94MU

This tutorial enables me to type in a location and destination and find a route between the two. For the purposes of my project, I want to have a list of the coordinates that make up the path, that is, “every” point that the car would travel through.

I am fairly new to coding, so if there is missing information please feel free to ask.

Thank You.

How to calculate the values of a 1 mile radius with 2 coordinates

I am trying to calculate how much latitude and longitude need to increase to cover a radius of one mile from a specific coordinate. I am using the haversine formula and am trying to find a way to reverse engineer the equation so that, starting from a longitude and latitude of 0,0, I can solve for the increase in longitude and latitude that corresponds to 1 mile.

I am stuck on reverse engineering the formula to find the second coordinate and am not sure how to approach this problem.

Here is the link for the haversine formula
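For what it’s worth, here is a minimal sketch of the kind of inversion I am after, assuming a spherical Earth of radius about 3958.8 miles (the radius constant and function names are just placeholders I picked):

#include <cmath>
#include <cstdio>

// Rough sketch: given a distance in miles (and a start latitude in degrees),
// compute the latitude/longitude offsets that the haversine formula maps back
// to that distance when moving due north or due east.
constexpr double kEarthRadiusMiles = 3958.8;
constexpr double kPi = 3.14159265358979323846;

double deg2rad(double d) { return d * kPi / 180.0; }
double rad2deg(double r) { return r * 180.0 / kPi; }

// Moving due north/south: the central angle is simply d / R.
double latOffsetDegrees(double distanceMiles) {
    return rad2deg(distanceMiles / kEarthRadiusMiles);
}

// Moving due east/west at a fixed latitude: invert
// d = 2 R asin( cos(lat) * sin(dLon / 2) ) for dLon.
double lonOffsetDegrees(double distanceMiles, double latDegrees) {
    double lat = deg2rad(latDegrees);
    double dLon = 2.0 * std::asin(std::sin(distanceMiles / (2.0 * kEarthRadiusMiles)) / std::cos(lat));
    return rad2deg(dLon);
}

int main() {
    std::printf("lat offset: %f deg\n", latOffsetDegrees(1.0));
    std::printf("lon offset: %f deg\n", lonOffsetDegrees(1.0, 0.0));
    return 0;
}

At a latitude and longitude of 0,0 both offsets come out to roughly 0.0145 degrees per mile.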

Grad in polar coordinates

As part of my lectures, it is noted that $ \nabla \sin \theta \propto \frac{1}{r^2}$ and $ \nabla \phi \propto \frac{1}{r^2}$ where we are working in spherical polar coordinates with $ \theta$ as the polar angle and $ \phi$ as the azimuthal (so the physics convention)…

Now I can’t seem to see why this is true. I’ve tried $$\nabla \sin \theta = \frac{\partial}{\partial r} (\sin \theta) + \frac{\partial}{\partial \theta} (\sin \theta) + \frac{\partial}{\partial \phi} (\sin \theta)$$ but I can’t see how a $\frac{1}{r^2}$ is going to come out of this.

I’ve also tried to work with grad in spherical polars but I still can’t seem to get the $\frac{1}{r^2}$, and likewise for $\nabla \phi$.
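For reference, the form of the gradient I have been trying to apply in spherical polars (physics convention: $\theta$ polar, $\phi$ azimuthal) is
$$\nabla f = \frac{\partial f}{\partial r}\,\hat{r} + \frac{1}{r}\frac{\partial f}{\partial \theta}\,\hat{\theta} + \frac{1}{r\sin\theta}\frac{\partial f}{\partial \phi}\,\hat{\phi},$$
which already carries a factor of $\frac{1}{r}$ in each angular term, but I still don’t see where a $\frac{1}{r^2}$ would come from.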

Help would be appreciated….

Mouse coordinates to grid coordinates within an isometric map

I wrote an algorithm to transform mouse screen coordinates within an isometric map to the corresponding coordinates of an underlying 2D square grid. The algorithm works and I tested it successfully. I wanted to post it here so interested people could check the algorithm and see if I can make any optimizations beyond what I did already. My goal is to make the algorithm as compact as possible. There may be some smart mathematical solutions I didn’t think of that would save some lines of code or make it faster.

The input data for the rendering of the map is a simple 2D-array like:

0,0,0,0
0,0,0,0
0,0,0,0
...

Here is a screenshot of what the map looks like when rendered. It is not really a map right now but more of an outlining of the tiles I will render. The red coordinates are x and y values of the underlying source grid.

[Screenshot: outlines of the rendered tiles, with the red x and y values of the underlying source grid]

It is rendered with a zig-zag approach with an origin outside of the screen (top left). It is rendered row by row from left to right, whereby every odd row is inset by half a tile’s width. This is a common approach to avoid having to fill the source 2D-grid with “zombie data” if one were to render it in a diamond approach.
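For reference, the grid-to-screen placement that this layout amounts to can be summed up as follows (a condensed restatement of the rendering loop in the code further down, using the same tile dimensions and the same half-tile vertical start offset):

#include <cstddef>

// Staggered (zig-zag) grid-to-screen mapping: (x, y) are source-grid
// coordinates; the returned point is the top vertex of the 2:1 rhombus on
// screen, before the global originX/originY offsets are added.
struct ScreenPoint
{
    int x;
    int y;
};

ScreenPoint gridToScreen(std::size_t x, std::size_t y, int tileWidth /*128*/, int tileHeight /*64*/)
{
    int xOffset = (y % 2 == 0) ? 0 : tileWidth / 2;                        // odd rows are inset by half a tile
    int screenX = static_cast<int>(x) * tileWidth + xOffset;
    int screenY = static_cast<int>(y) * (tileHeight / 2) - tileHeight / 2; // yScreenOffset = -tileHeightHalf
    return ScreenPoint{screenX, screenY};
}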

What the algorithm does is:

  • the screen is logically divided into rectangles with the width of my tile width and the height of my tile height, i.e. 128px by 64px
  • the mouse coordinates are divided by tile width and tile height and then floored. This determines which screen rectangle the user clicked in
  • then the mouse coordinates are compared against the center coordinates of this rectangle
  • based on this result, a right-angled triangle is calculated to check whether the user clicked within a certain area of the rectangle. I compare the angle obtained from the cosine of this triangle against my initial rotation angle of 26.565° (the arctangent of 0.5). This is the angle by which the sides of the rendered rhombi are rotated to achieve a 2:1 ratio of width to height (classic isometric projection for video games); see the sketch after this list.
  • depending on where the user clicked
    • the y value is determined from a lower and upper boundary
    • the x value is determined based on y being even or odd
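One simplification I am considering for that angle check (a sketch only, not yet in the code below): since the comparison is against atan(0.5), the same decision can be made by comparing the two triangle sides directly, with no sqrt, acos, or atan:

// angleRad > atan(0.5) is equivalent (for angles between 0 and 90 degrees) to
// opposite / adjacent > 0.5, i.e. 2 * opposite > adjacent, where 'opposite'
// and 'adjacent' are the pixel distances already computed in mouseToGrid.
// Clicks exactly on the tile edge are a tie and may go to either side.
bool beyondTileEdge(double opposite, double adjacent)
{
    return 2.0 * opposite > adjacent;
}

In the 'S' sector, angleRad > controlAngleRad would become beyondTileEdge(opposite, adjacent); in the 'N' sector, angleRad < controlAngleRad would become !beyondTileEdge(opposite, adjacent).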

To run this code, you will need SDL2 2.0.9 and Boost 1.67.0. I am using MS Visual Studio Community 2017 as an IDE.

#include <iostream>
#include <vector>
#include <cmath>       // std::floor, std::sqrt, std::pow, std::acos, std::atan
#include <SDL.h>
#include <SDL_Image.h>
#include <boost/cast.hpp>

struct Point
{
    std::size_t x;
    std::size_t y;
};

Point mouseToGrid(int pMouseXPos, int pMouseYPos, std::size_t pTileWidth, std::size_t pTileHeight, const double PI, const int SCREEN_WIDTH, const int SCREEN_HEIGHT)
{
    double mouseXPos = boost::numeric_cast<double>(pMouseXPos);
    double mouseYPos = boost::numeric_cast<double>(pMouseYPos);
    double tileWidth = boost::numeric_cast<double>(pTileWidth);
    double tileHeight = boost::numeric_cast<double>(pTileHeight);
    double tileWidthHalf = tileWidth / 2;
    double tileHeightHalf = tileHeight / 2;

    double mouseTileYPos = mouseYPos / tileHeight;
    mouseTileYPos = std::floor(mouseTileYPos);

    int screenRectCenterX = ((std::floor((mouseXPos / tileWidth))) * tileWidth) + tileWidthHalf;
    int screenRectCenterY = (mouseTileYPos * tileHeight) + tileHeightHalf;

    //determine lower and upper boundary for y
    int minY = boost::numeric_cast<int>(2 * mouseTileYPos);
    int maxY = boost::numeric_cast<int>((2 * mouseTileYPos) + 1);
    if (mouseYPos >= screenRectCenterY)
    {
        minY = maxY;
        maxY++;
    }

    //calc triangle sides in pixels
    char mouseRectangleSector[2]{};
    double opposite;
    double adjacent;

    if (mouseYPos >= screenRectCenterY)
    {
        mouseRectangleSector[0] = 'S';
        opposite = mouseYPos - screenRectCenterY;
    }
    else
    {
        mouseRectangleSector[0] = 'N';
        opposite = screenRectCenterY - mouseYPos;
    }
    if (mouseXPos >= screenRectCenterX)
    {
        mouseRectangleSector[1] = 'E';
        adjacent = (screenRectCenterX + tileWidthHalf) - mouseXPos;
    }
    else
    {
        mouseRectangleSector[1] = 'W';
        adjacent = tileWidthHalf - (screenRectCenterX - mouseXPos);
    }

    double hypothenuse = std::sqrt(std::pow(opposite, 2) + std::pow(adjacent, 2));

    //calculate cos and corresponding angle in rad and deg
    double cos = adjacent / hypothenuse;
    double angleRad = std::acos(cos);
    double angleDeg = angleRad * 180 / PI;
    //calculate initial rotation angle in rad and deg
    double controlAtan = 0.5;
    double controlAngleRad = std::atan(controlAtan);
    double controlAngleDeg = controlAngleRad * 180 / PI;

    //determine final position for y
    if (mouseRectangleSector[0] == 'S')
    {
        if (angleRad > controlAngleRad)
        {
            mouseTileYPos = maxY;
        }
        else
        {
            mouseTileYPos = minY;
        }
    }
    else
    {
        if (angleRad < controlAngleRad)
        {
            mouseTileYPos = maxY;
        }
        else
        {
            mouseTileYPos = minY;
        }
    }

    //determine position for x
    double mouseTileXPos;
    if ((boost::numeric_cast<int>(mouseTileYPos)) % 2 == 0)
    {
        mouseTileXPos = (mouseXPos + tileWidthHalf) / tileWidth;
    }
    else
    {
        mouseTileXPos = mouseXPos / tileWidth;
    }
    mouseTileXPos = std::floor(mouseTileXPos);

    Point gridXY{(std::size_t)mouseTileXPos, (std::size_t)mouseTileYPos};

    return gridXY;
}

int main(int argc, char *args[])
{
    const double PI = 3.1415926535897932384626433832795;
    const int SCREEN_WIDTH = 1600;
    const int SCREEN_HEIGHT = 900;

    //init SDL Components
    if (SDL_Init(SDL_INIT_EVERYTHING) != 0)
    {
        return 1;
    }

    SDL_Window *sdlWindow = SDL_CreateWindow("A New Era", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, SCREEN_WIDTH, SCREEN_HEIGHT, SDL_WINDOW_SHOWN);
    SDL_Renderer *sdlRenderer = SDL_CreateRenderer(sdlWindow, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);

    if (sdlWindow == nullptr)
    {
        SDL_DestroyWindow(sdlWindow);
        SDL_Quit();
        return 1;
    }

    if (sdlRenderer == nullptr)
    {
        SDL_DestroyWindow(sdlWindow);
        SDL_DestroyRenderer(sdlRenderer);
        SDL_Quit();
        return 1;
    }

    if (!(IMG_Init(IMG_INIT_PNG)))
    {
        SDL_DestroyWindow(sdlWindow);
        SDL_DestroyRenderer(sdlRenderer);
        SDL_Quit();
        return 1;
    }

    SDL_SetRenderDrawColor(sdlRenderer, 0, 0, 0, 255);
    SDL_RenderClear(sdlRenderer);
    SDL_SetRenderDrawColor(sdlRenderer, 255, 255, 255, SDL_ALPHA_OPAQUE);

    //tile dimensions
    std::size_t tileWidth = 128;
    std::size_t tileHeight = 64;
    std::size_t tileWidthHalf = tileWidth / 2;
    std::size_t tileHeightHalf = tileHeight / 2;
    int originX = 0;
    int originY = 0;
    int xScreenOffset;
    int yScreenOffset = 0 - tileHeightHalf;

    //add pseudo data points to tile vector
    std::vector<Point> mapTiles{};

    for (std::size_t y = 0; y < 10; y++)
    {
        for (std::size_t x = 0; x < 10; x++)
        {
            Point point{x, y};
            mapTiles.push_back(point);
        }
    }

    for (std::vector<Point>::iterator itPoint = mapTiles.begin(); itPoint < mapTiles.end(); itPoint++)
    {
        if (itPoint->y % 2 == 0)
        {
            xScreenOffset = 0;
        }
        else
        {
            xScreenOffset = tileWidthHalf;
        }

        //draw 2:1 rhombus
        std::size_t tileOriginX = itPoint->x * tileWidth;
        std::size_t tileOriginY = itPoint->y * tileHeightHalf;
        std::size_t x1 = tileOriginX + tileWidthHalf;
        std::size_t y1 = tileOriginY + tileHeightHalf;
        SDL_RenderDrawLine(sdlRenderer, tileOriginX + originX + xScreenOffset, tileOriginY + originY + yScreenOffset, x1 + originX + xScreenOffset, y1 + originY + yScreenOffset);
        std::size_t x = x1;
        std::size_t y = y1;
        x1 = x - tileWidthHalf;
        y1 = y + tileHeightHalf;
        SDL_RenderDrawLine(sdlRenderer, x + originX + xScreenOffset, y + originY + yScreenOffset, x1 + originX + xScreenOffset, y1 + originY + yScreenOffset);
        x = x1;
        y = y1;
        x1 = x - tileWidthHalf;
        y1 = y - tileHeightHalf;
        SDL_RenderDrawLine(sdlRenderer, x + originX + xScreenOffset, y + originY + yScreenOffset, x1 + originX + xScreenOffset, y1 + originY + yScreenOffset);
        x = x1;
        y = y1;
        x1 = tileOriginX;
        y1 = tileOriginY;
        SDL_RenderDrawLine(sdlRenderer, x + originX + xScreenOffset, y + originY + yScreenOffset, x1 + originX + xScreenOffset, y1 + originY + yScreenOffset);
    }
    SDL_RenderPresent(sdlRenderer);

    //game loop
    //control variables
    SDL_Event event;
    bool quit = false;

    //originX = originX - tileWidthHalf;
    while (!quit)
    {
        while (SDL_PollEvent(&event) != 0)
        {
            if (event.type == SDL_QUIT)
            {
                quit = true;
            }
            else if (event.type == SDL_MOUSEBUTTONDOWN)
            {
                if (event.button.button == SDL_BUTTON_LEFT)
                {
                    int mouseXPos;
                    int mouseYPos;
                    SDL_GetMouseState(&mouseXPos, &mouseYPos);

                    Point gridCoordinates = mouseToGrid(mouseXPos, mouseYPos, tileWidth, tileHeight, PI, SCREEN_WIDTH, SCREEN_HEIGHT);
                    std::cout << "x,y : " << gridCoordinates.x << "," << gridCoordinates.y << std::endl;
                }
            }
        }
    }

    SDL_DestroyWindow(sdlWindow);
    SDL_DestroyRenderer(sdlRenderer);
    SDL_Quit();
    return 0;
}

As I said, the algorithm works, so I don’t need help getting it to run. I am looking for help or advice on how to make it more compact (reduce lines of code). Maybe I can get rid of a couple of if statements by doing some more maths. This is really the first time since my graduation that I have used this kind of maths in programming, so there might be some tweaks that can be done.

Rational coordinates

If the vertices of a triangle have rational coordinates, then which triangle centers also have rational coordinates?

We are familiar with the fact that the centroid obviously has rational coordinates. How can we then verify and prove whether the incenter, orthocenter, circumcenter, and nine-point center have rational coordinates?
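For context, the centroid case is clear because, for vertices $A=(x_1,y_1)$, $B=(x_2,y_2)$, $C=(x_3,y_3)$ with rational coordinates,
$$G=\left(\frac{x_1+x_2+x_3}{3},\ \frac{y_1+y_2+y_3}{3}\right)$$
is rational. What I am unsure about is how to handle centers such as the incenter, whose usual formula
$$I=\frac{aA+bB+cC}{a+b+c}$$
involves the side lengths $a, b, c$, which are square roots and need not be rational.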

conformal coordinates on a Riemann surface

Let $ \Sigma$ be a Riemann surface with complex structure $ j$ and a volume form $ dvol_\Sigma$ . I read somewhere that one can take the so-called ‘conformal coordinates’ $ z=s+it$ so that $ j\partial_s = \partial_t$ and $ dvol_\Sigma = f^2(ds\wedge dt)$ for some function $ f$ .

Question: When can we find a conformal coordinate so that locally $ dvol_\Sigma =ds\wedge dt$ ?

XMVector3Project is giving offset values when trying to convert world space coordinates to screen space coordinates

I am trying to convert world space coordinates to screen space coordinates for a 2D game so I can work on collisions with the edges of the window.


The code I am using to convert world space coordinates to screen space is below

float centerX = boundingBox.m_center.x;
XMVECTORF32 centerVector = { centerX, 0.0f, 0.0f, 0.0f };

Windows::Foundation::Size screenSize = m_deviceResources->GetLogicalSize();

//the matrices are passed in transposed, so reverse it
DirectX::XMMATRIX projection = XMMatrixTranspose(XMLoadFloat4x4(&m_projectionMatrix));
DirectX::XMMATRIX view = XMMatrixTranspose(XMLoadFloat4x4(&m_viewMatrix));
worldMatrix = XMMatrixTranspose(worldMatrix);

XMVECTOR centerProjection = XMVector3Project(centerVector, 0.0f, 0.0f, screenSize.Width, screenSize.Height, 0.0f, 1.0f, projection, view, worldMatrix);

float centerScreenSpace = XMVectorGetX(centerProjection);

std::stringstream ss;
ss << "CENTER SCREEN SPACE X: " << centerScreenSpace << std::endl;
OutputDebugStringA(ss.str().c_str());

I have a window of width 1262. The object is positioned at (0.0, 0.0, 0.0) and the current screen space X coordinate for this position is 631, which is correct. However, the problem occurs when I move the object towards the edges of the screen.


When I move the object to the left, the current screen space X coordinate for this position is 0.107788, when realistically it should be well above 0, as the center point is still on the screen and nowhere near the edge. The same happens when moving the object to the right.

The screen size in pixels is correct, but something treats the screen as if it had a premature cut-off, like in the image below. It behaves as if the red dotted lines were the edges of the window, when they’re not. I can fix this by adding an additional offset, but I don’t believe that is the correct way to fix it and would like to know where I’m going wrong.

[Image: the window with red dotted lines marking where the apparent premature cut-off occurs]

Does anyone know why the coordinates are incorrect?
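For reference, this is a sketch of how I have been double-checking the number by hand (it reuses centerX, screenSize, projection, view and worldMatrix from the snippet above): redo the projection manually so I can see at which step the value stops matching expectations.

using namespace DirectX;

// Manual version of what XMVector3Project does for the X coordinate.
XMVECTOR worldPoint = XMVectorSet(centerX, 0.0f, 0.0f, 0.0f);

// Same transform order XMVector3Project uses: world, then view, then projection.
XMMATRIX worldViewProj = XMMatrixMultiply(XMMatrixMultiply(worldMatrix, view), projection);

// TransformCoord applies the matrix and divides by w, giving NDC in [-1, 1].
XMVECTOR ndc = XMVector3TransformCoord(worldPoint, worldViewProj);

// Viewport mapping with viewport origin (0, 0), as in the call above.
float screenX = (XMVectorGetX(ndc) * 0.5f + 0.5f) * screenSize.Width;

Comparing XMVectorGetX(ndc) against the value XMVector3Project returns would make it easier to tell whether the offset comes from the matrices or from the viewport mapping.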