## Extending a function defined on $\mathbb{R}^n\setminus\{0\}$ to a continuous function on $\mathbb{R}^{n}$

Let $$g: \mathbb{R}^n\setminus\{0\} \to \mathbb{R}$$ be a function of class $$C^{1}$$ and suppose that there is $$M > 0$$ such that $$\left|\frac{\partial}{\partial x_{i}}g(x)\right| \leq M$$ for every $$x$$ and every $$i$$. Prove that if $$n \geq 2$$ then $$g$$ can be extended to a continuous function defined on $$\mathbb{R}^n$$. Show that if $$n = 1$$ the statement is false.

My attempt.

I want to define the extension $$\bar{g}: \mathbb{R}^n \to \mathbb{R}$$ by $$\bar{g}(x) = g(x)$$ if $$x \in \mathbb{R}^{n}\setminus\{0\}$$ and $$\displaystyle \bar{g}(0) = \lim_{x \to 0}g(x)$$. Then $$\lim_{x \to 0} \bar{g}(x) = \lim_{x \to 0}g(x) = \bar{g}(0)$$, so $$\bar{g}$$ is continuous. The question is thus reduced to proving that $$\displaystyle \lim_{x \to 0}g(x)$$ exists, i.e. that there is some $$L$$ such that $$\forall \epsilon > 0, \exists \delta > 0\text{ s.t. } 0 < \Vert x \Vert < \delta \Longrightarrow |g(x)-L|<\epsilon.$$

The hypothesis $$\left|\frac{\partial}{\partial x_{i}}g(x)\right| \leq M$$ seems necessary to get $$|g(x)-g(y)| \leq M\Vert x-y\Vert$$ using the Mean Value Inequality. This almost solves the problem, because if $$g(0) = 0$$ we could write $$|g(x)| \leq M\Vert x\Vert.$$ But $$g(0) = 0$$ doesn't make sense, since $$g$$ is not defined at $$0$$. I'm stuck here.

Also, I cannot see why $$n \geq 2$$ is necessary, where I would use this in the proof, or why the statement fails when $$n=1$$.
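A standard counterexample for $$n = 1$$ (assuming the hypotheses are exactly as stated) is $$g(x) = \frac{x}{|x|},$$ which satisfies $$g'(x) = 0$$ for all $$x \neq 0$$, so the derivative is bounded by any $$M > 0$$; yet $$\lim_{x \to 0^-} g(x) = -1 \neq 1 = \lim_{x \to 0^+} g(x)$$, so no continuous extension to $$0$$ exists. Roughly, $$n \geq 2$$ matters because $$\mathbb{R}^n\setminus\{0\}$$ is then path-connected, and two points near $$0$$ can be joined by a path avoiding $$0$$ whose length is comparable to $$\Vert x - y \Vert$$, which lets the Mean Value Inequality control $$|g(x) - g(y)|$$.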

## Failed to get document content data. System.ComponentModel.Win32Exception (0x80004005): Cannot complete this function

Hi, I have a publishing web application. The errors below keep appearing in the SharePoint logs, some pages that reference a custom master page image (_catalogs/masterpage/NewDesign/images/login_icon.png) do not load in the browser, and an administrator user gets a 401 error when logging in to the site.

Failed to get document content data. System.ComponentModel.Win32Exception (0x80004005): Cannot complete this function at Microsoft.SharePoint.SPSqlClient.GetDocumentContentRow(Int32 rowOrd, Object ospFileStmMgr, SPDocumentBindRequest& dbreq, SPDocumentBindResults& dbres)

Could not get DocumentContent row: 0x80004005.

## Derivative of an inverse of a function

I'm new to calculus. Given $$f:\mathbb{R}\rightarrow \mathbb{R}$$, I know that $$f(0)=2$$ and $$f'(0)=3$$. I can't understand why, given the inverse function $$g=f^{-1}$$, it is true that $$g'(2)=\frac{1}{3}$$.
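Assuming $$f$$ is invertible near $$0$$ and $$f'(0) \neq 0$$, the identity can be sketched by differentiating $$g(f(x)) = x$$ with the chain rule:

$$g'(f(x))\,f'(x) = 1 \quad\Longrightarrow\quad g'(f(0)) = \frac{1}{f'(0)},$$

and since $$f(0) = 2$$ and $$f'(0) = 3$$, this gives $$g'(2) = \frac{1}{3}$$.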

## Which Orlicz functions $f$ make the function $f^{-1}\left(\frac{\sum_{j=1}^s f(x_j)}{s}\right)$ convex?

Let $$f:\mathbb{R}_+\to\mathbb{R}_+$$ be an Orlicz function (sometimes referred to as a Young function), i.e. a convex, non-decreasing function such that $$f(0)=0$$. I am trying to study the convexity of the function $$\phi:\mathbb{R}^n\to\mathbb{R}$$ given by $$\phi(\mathbf{x})=f^{-1}\left(\frac{1}{s}\sum_{j=1}^s f(|{x}_j|)\right)$$.

Note that when $$f(x)=|x|^p,\ p\ge 1$$, then $$\phi(\mathbf{x})=\frac{\|\mathbf{x}\|_p}{s^{1/p}}$$, which is a convex function. Similarly, a bit of calculation shows that $$\phi$$ is also convex when $$f(x)=e^{ax}-1$$ for any $$a>0$$. So intuitively I thought that this result might hold for all Orlicz functions $$f$$. However, I have not been able to prove it, and in fact I suspect it fails for many functions, for example $$f(x)=e^{x^2}-1$$.

I am therefore left with investigating which Orlicz functions $$f$$ make the corresponding $$\phi$$ convex. However, even assuming that $$f$$ is twice differentiable, the Hessian of $$\phi$$ is turning out to be too difficult to analyze. At this point I am not sure how to proceed. Can anyone kindly suggest some ideas, or point me to some relevant references that can help me proceed in this investigation? Thanks in advance.
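As a quick numerical sanity check (a sketch only; `phi` and `midpoint_convex` are helper names introduced here, not from any library), one can test midpoint convexity of $$\phi$$ on random points for a given $$f$$. Here $$f(x)=x^2$$, for which $$\phi(\mathbf{x})=\|\mathbf{x}\|_2/\sqrt{s}$$ is known to be convex:

```python
import random

def phi(x, f, finv):
    # phi(x) = f^{-1}( (1/s) * sum_j f(|x_j|) )
    s = len(x)
    return finv(sum(f(abs(t)) for t in x) / s)

def midpoint_convex(f, finv, n=3, trials=2000, tol=1e-9):
    # Sample random pairs and check phi((x+y)/2) <= (phi(x) + phi(y)) / 2.
    random.seed(0)
    for _ in range(trials):
        x = [random.uniform(-2.0, 2.0) for _ in range(n)]
        y = [random.uniform(-2.0, 2.0) for _ in range(n)]
        mid = [(a + b) / 2 for a, b in zip(x, y)]
        if phi(mid, f, finv) > (phi(x, f, finv) + phi(y, f, finv)) / 2 + tol:
            return False
    return True

# f(x) = x^2 with inverse y -> sqrt(y): phi(x) = ||x||_2 / sqrt(s), convex
print(midpoint_convex(lambda t: t * t, lambda y: y ** 0.5))
```

The same harness can be pointed at candidate counterexamples such as $$f(x)=e^{x^2}-1$$ (with $$f^{-1}(y)=\sqrt{\ln(1+y)}$$) to hunt for midpoint-convexity violations numerically before attempting a proof.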

## Module imported in one function raises NameError in another function called afterward

I have a Python3 script that installs pip3 and a digitalocean module for creating droplets.

I have broken the script up into 3 functions: Install(), Run(), and Uninstall(). In the Install() function I can install pip3 and the digitalocean module.

I have multiple functions that I want to call in the Run() function. At the beginning of Run() I import the digitalocean module. When I call another function that uses this module I get “NameError: name ‘digitalocean’ is not defined”.

Everything I have read says that I can import in a function and then use that import in another function. I don't know if Python3 is different? Is there something I am missing? (There has to be.)

Here is relevant code that has the bulk pulled out. Let me know if you need more.

```python
#!/usr/bin/python3
import importlib.util
from subprocess import Popen, PIPE, STDOUT
import sys
import subprocess
import time

accessToken = 'ABC'
dropletName = 'newDropletAndTag'
tagName     = dropletName

def Install():
    pass
    #This function installs the package and other things if they are not already present.

def CreateDroplet():
    newDroplet = digitalocean.Droplet(token       = accessToken,
                                      name        = dropletName,
                                      region      = 'NYC1',
                                      image       = 'ubuntu-16-04-x64',
                                      size_slug   = 's-1vcpu-1gb',
                                      ssh_keys    = sshKeysList,
                                      backups     = False
                                      )

def Run():
    import digitalocean
    myManager = digitalocean.Manager(token=accessToken)
    myDroplets = myManager.get_all_droplets(tag_name=tagName)

    Install()

    CreateDroplet()

def Main():
    #START OF SCRIPT
    print('\n\n\n')
    print('---- Start Of Script ----')
    Run()
    print('---- End Of Script ----')
    print('\n\n\n')
    #END OF SCRIPT

if __name__ == '__main__':
    Main()
```
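A minimal, self-contained sketch of the underlying behavior (using the standard-library `json` module as a stand-in for `digitalocean`): an `import` inside a function binds the module name only in that function's local scope, so other functions never see it.

```python
def run():
    import json  # binds the name 'json' only inside run()'s local scope
    return json.dumps({"ok": True})

def helper():
    # 'json' is not defined here: the import in run() is local to run()
    return json.dumps({"fail": True})

print(run())  # works

try:
    helper()
except NameError as exc:
    print("NameError:", exc)
```

Moving the import to module level (or importing it inside each function that uses it) makes the name visible wherever it is needed.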

## Sufficient condition for the absolute convergence of Fourier series of a function on the adele quotient $\mathbb A_k/k$

Let $$G$$ be a compact abelian group. The unitary characters of $$G$$ form an orthonormal basis of $$L^2(G)$$, so every square integrable function $$f: G \rightarrow \mathbb C$$ admits a Fourier expansion

$$f(x) = \sum\limits_{\chi \in \hat{G}} c_{\chi} \chi(x) \tag{1}$$

where the $$c_{\chi}$$ are uniquely determined complex numbers satisfying $$\sum\limits |c_{\chi}|^2 < \infty$$, and the right hand side converges to $$f$$ in the $$L^2$$-norm.

If moreover $$\sum\limits |c_{\chi}| < \infty \tag{2}$$ then (1) is actually a pointwise limit (and in fact a uniform limit).

When $$G = \mathbb R/\mathbb Z$$, it is well known that a sufficient condition for (2) is that $$f$$ be smooth (even just $$C^1$$).

What about when $$G = \mathbb A_k/k$$ for $$k$$ a number field, and $$\mathbb A_k$$ the adeles of $$k$$? There is a notion of a smooth function on $$\mathbb A_k$$ (being smooth in the archimedean argument, and locally constant in the nonarchimedean). Does the Fourier series of a smooth function $$f$$ on $$\mathbb A_k/k$$ satisfy (2)? Or if not, is there a well known sufficient condition on $$f$$ for (2) to hold?

## Bounded function with finitely many discontinuities is integrable $\overset{?}{\Rightarrow}$ density of continuous distribution function is not unique

The density function of the distribution function of a continuous random variable is not uniquely defined.
A new density function can be obtained by changing the value of the function at a finite number of points to some non-negative values, without changing the integral of the function. We then get a new density function for the same continuous distribution.

Does this follow from the theorem:

A bounded function with a finite number of discontinuities over an interval is Riemann integrable.

or is there a different theorem supporting the above claim? Is the theorem a sufficient justification?

## magento 2.3 Fatal error: Uncaught Error: Call to undefined function mime_content_type() in

I get "Fatal error: Uncaught Error: Call to undefined function mime_content_type()" when trying to upload an image from admin blocks. Please check the screenshot.

## My Python code is not returning anything upon function call, even though there are no blockages inside the function

Here is my code:

```python
class Solution:
    def searchInsert(self, nums, target):
        """
        :type nums: List[int]
        :type target: int
        :rtype: int
        """
        flag = 0
        mid = int(len(nums)/2)
        if (target == nums[mid]):
            print("entered here")
            flag = 1
            return 100
        if ((target < nums[mid]) and (mid > 0)):
            self.searchInsert(nums[:mid], target)
        if ((target > nums[mid]) and (mid < (len(nums)-1))):
            self.searchInsert(nums[mid:], target)
```

Assuming the search value is guaranteed to be in the array, when I run this I get no return value:

```python
s = [1, 3, 5, 6]
v = 6
ret = Solution().searchInsert(s, v)
```

My output to this is: entered here

Why is my code not returning 100 when it is right next to the print statement which is being executed?
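The behavior can be reproduced with a much smaller function (a hypothetical `f`, not part of the original code): a recursive call whose result is not `return`ed is simply discarded, and the enclosing call falls off the end and returns `None`.

```python
def f(n):
    if n == 0:
        return 100
    f(n - 1)  # recursive result is computed, then discarded: no 'return'

print(f(0))  # 100: the base case returns directly
print(f(3))  # None: the recursive result was never propagated
```

Writing `return self.searchInsert(...)` in both recursive branches propagates the value back up the call chain.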

## Proof of heuristic function in A star algorithm

Maybe I am missing something very easy and obvious.

But I don't understand why the estimated cost of the source vertex is subtracted in the reduced cost when the heuristic function $$h$$ is monotonic: $$d'(x,y)=d(x,y)+h(y)-h(x)$$

What I currently know:

The A* algorithm can be used as an extension of Dijkstra's algorithm. At each iteration of its main loop, it chooses the vertex that minimizes the estimated cost plus the cost of the path to that vertex:

For a vertex $$u$$ and its successor $$v$$, the overall cost is calculated as $$f(u, v) = d(u, v) + h(v)$$ for some heuristic function $$h$$, where:

• $$d(u,v)$$ cost of the path from $$u$$ to $$v$$
• $$h(v)$$ estimate cost from $$v$$ to the target vertex $$t$$

If for any adjacent vertices $$u$$ and $$v$$ it is true that $$h(u) \le d(u, v) + h(v)$$, then $$h$$ is monotonic. In other words, the heuristic satisfies a triangle inequality.

It is stated in Wiki page of A* algorithm:

If the heuristic h satisfies the additional condition $$h(x) ≤ d(x, y) + h(y)$$ for every edge $$(x, y)$$ of the graph (where d denotes the length of that edge), then h is called monotone, or consistent. In such a case, A* can be implemented more efficiently—roughly speaking, no node needs to be processed more than once (see closed set below)—and A* is equivalent to running Dijkstra’s algorithm with the reduced cost $$d'(x, y) = d(x, y) + h(y) − h(x)$$.

My questions are:

and A* is equivalent to running Dijkstra’s algorithm with the reduced cost $$d'(x, y) = d(x, y) + h(y) − h(x)$$.

Is there any proof of this equivalence?

It is clear to me that $$0 \le d(x, y) + h(y) - h(x)$$, so the reduced cost is feasible. But:

• Why is this formula chosen as the new distance function?
• Is there any formal proof that it works?
• Why is it not enough to run Dijkstra with $$d'(x, y) = d(x, y) + h(y)$$?
• What is the math behind it?
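One way to see the equivalence (a sketch, not a full proof) is that along any path $$P = (v_0, \dots, v_k)$$ from $$s$$ to $$t$$, the reduced costs telescope:

$$\sum_{i=0}^{k-1} d'(v_i, v_{i+1}) = \sum_{i=0}^{k-1} \bigl(d(v_i, v_{i+1}) + h(v_{i+1}) - h(v_i)\bigr) = \Bigl(\sum_{i=0}^{k-1} d(v_i, v_{i+1})\Bigr) + h(t) - h(s).$$

Since $$h(t) - h(s)$$ is the same constant for every $$s$$-$$t$$ path, the reduced length of each path differs from its true length by that constant, so both cost functions rank $$s$$-$$t$$ paths identically; monotonicity guarantees $$d'(x,y) \ge 0$$, which is exactly what Dijkstra's algorithm requires. Using $$d(x, y) + h(y)$$ alone would add a heuristic term once per edge instead of telescoping, so paths with different numbers of edges would be penalized differently and the ranking of paths would be distorted.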