How to stop Mathematica 12.1 from chopping off the axes arrows?

Using the following code, I make a simple graph that is exported to pdf:

    format = AxesStyle -> {{Thickness[.01], Arrowheads[{0.0, 0.05}]},
       {Arrowheads[{0.0, 0.05}]}};
    graph = ListLinePlot[Table[{t, 2*t}, {t, 0, 100}], format, AspectRatio -> .2]
    Export[StringJoin[NotebookDirectory[], "\ext.pdf"], graph];

When I look at the exported graph, it cuts off the end of one of the arrows:


The y axis is fine, but the x axis is not: the end of the arrow has been clipped. It seems to be related to increasing the thickness of the axis.

For what it's worth, the in-notebook display of the graph has complete arrows; it is only the exported version that is clipped.

I want both axes thick, I want the whole arrowheads, and I want PDF output. How can I do this?

Note that this problem did not occur in Mathematica 11.3; it has only arisen since I upgraded. I'm using 12.1.

Shaking axes when animating

I have the code below to animate a point. During the animation, the axes shake. I have added PlotRange and PlotRangePadding -> None, but they still shake. If I export the animation to a GIF, the axes stop shaking. Can anyone help?

    Animate[
     Show[
      Graphics[
       {
        PointSize[0.02],
        Point[{Cos[t], Cos[t]}]
       },
       PlotRange -> {{-2, 2}, {-2, 2}},
       Axes -> True
      ],
      PlotRange -> {{-2, 2}, {-2, 2}},
      PlotRangePadding -> None
     ], {t, 0, 10, 0.1}]

Axes Change in 3D Plot

I’d like to change the axes. Please consider the following code:

    Z[t_, \[Alpha]_] = t^3*\[Alpha];
    Plot3D[Z[t, \[Alpha]], {t, 0, 2}, {\[Alpha], 0.5, 1},
     AxesLabel -> {"t", "\[Alpha]", "Z"}]

I want to swap the axes: Z with \[Alpha], t with Z, and \[Alpha] with t. Any suggestions?

Why does changing axes size work differently inside and outside a function?

I’m working with existing code that creates a matplotlib figure with one axes, and adds a colorbar to it after the fact using make_axes_locatable and append_axes. It works, but I want to subsequently change the vertical height of the colorbar. I’ve figured out that this is only possible if I call the set_axes_locator(None) method on the colorbar axes (not 100% sure why); if I don’t do that, any calls to cax.set_position() silently do nothing. Here’s the setup; question below:

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.colorbar import ColorbarBase
    from mpl_toolkits.axes_grid1 import make_axes_locatable


    def make_figure(n_levels, shrink=False):
        data = np.random.randint(n_levels, size=(5, 7))
        fig, ax = plt.subplots()
        ax.imshow(data)
        divider = make_axes_locatable(ax)
        cax = divider.append_axes('right', size='5%', pad=0.1)
        cmap = ax.images[0].get_cmap()
        cmap = cmap._resample(n_levels)
        _ = ColorbarBase(cax, cmap=cmap, norm=None, orientation='vertical')
        if shrink:
            shrink_colorbar(cax, n_levels)
        return fig


    def shrink_colorbar(cax, n_levels):
        pos = cax.get_position().bounds
        height = pos[-2] * n_levels
        new_y = pos[1] + (pos[-1] - height) / 2
        newpos = (pos[0], new_y, pos[2], height)
        cax.set_axes_locator(None)
        cax.set_position(newpos)

If I do the colorbar resizing inside the function where the figure is created, I get the wrong result every time, whether I use the regular Python REPL or IPython:

    n_levels = 3
    make_figure(n_levels, shrink=True)

WRONG RESULT:

[image: plot with overlarge colorbar that obscures the main figure - wrong result]

If I create the figure first, then shrink the colorbar after the fact, it works how I want it to, as long as these lines are run separately (either in the regular Python REPL with plt.ion(), or with each line in a separate IPython cell):

    fig = make_figure(n_levels)
    cax = fig.axes[-1]
    shrink_colorbar(cax, n_levels)

CORRECT RESULT:

[image: plot with colorbar in correct location]

If those same lines are run as a single IPython cell, or if run with plt.ioff() in a regular Python REPL and followed by plt.show(), I get the same bad result as in the first example above (with shrink=True in the outer function).

How can I get the correct result while still doing the colorbar resizing inside the function (instead of in userland)?
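One detail that might explain the interactive/non-interactive difference: the locator installed by append_axes only assigns cax its final (thin) position at draw time, so inside the function cax.get_position() can still return a stale box. The sketch below is my assumption, not a confirmed fix; it uses fig.colorbar instead of ColorbarBase for simplicity, and its only real change is forcing a draw with fig.canvas.draw() before measuring and repositioning:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for this sketch
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable


def make_figure(n_levels):
    data = np.random.randint(n_levels, size=(5, 7))
    fig, ax = plt.subplots()
    im = ax.imshow(data)
    divider = make_axes_locatable(ax)
    cax = divider.append_axes('right', size='5%', pad=0.1)
    fig.colorbar(im, cax=cax)
    # Assumption: force a draw so the axes locator computes cax's real
    # (thin) position before we measure it; without this, get_position()
    # inside this function may still return a stale/default box.
    fig.canvas.draw()
    shrink_colorbar(cax, n_levels)
    return fig


def shrink_colorbar(cax, n_levels):
    x0, y0, w, h = cax.get_position().bounds
    new_h = w * n_levels            # same height rule as in the question
    new_y = y0 + (h - new_h) / 2    # keep the colorbar vertically centered
    cax.set_axes_locator(None)      # detach the locator so set_position sticks
    cax.set_position((x0, new_y, w, new_h))


fig = make_figure(3)
```

If the draw-timing explanation is right, this should give the "correct result" geometry even when everything runs in one go.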

Projection of a polytope along 4 orthogonal axes

Consider the following problem:

Given an $\mathcal{H}$-polytope $P$ in $\mathbb{R}^d$ and $4$ orthogonal vectors $v_1, \dots, v_4 \in \mathbb{R}^d$, compute the projection of $P$ onto the subspace generated by $v_1, \dots, v_4$ (and output it as an $\mathcal{H}$-polytope).

I know that the problem of computing projections along $k$ orthogonal vectors is NP-hard (if $k$ and $d$ are part of the input), as shown in this paper. But does it help if $k$ is a constant? Specifically, does it help if $k \leq 4$? Do we have a polynomial algorithm in this case?
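For context, the standard tool here is Fourier–Motzkin elimination: projecting an $\mathcal{H}$-polytope onto a subspace amounts to eliminating the $d-k$ coordinates orthogonal to it, and each elimination step can square the number of inequalities. Note that for this question the eliminated dimension $d-k$ is not constant even when $k \leq 4$ is. A minimal single-variable elimination step, purely as an illustration:

```python
from itertools import product


def eliminate_last(A, b):
    """One Fourier-Motzkin step: eliminate the last variable from
    {x : A x <= b} (A a list of rows, b a list of bounds).
    Returns (A2, b2) describing the projection onto the remaining coords."""
    pos, neg, zero = [], [], []
    for row, bi in zip(A, b):
        c = row[-1]
        (pos if c > 0 else neg if c < 0 else zero).append((row, bi))
    # Rows not involving the eliminated variable survive unchanged.
    out = [(row[:-1], bi) for row, bi in zero]
    # Each upper bound (c > 0) combines with each lower bound (c < 0),
    # which is where the quadratic blow-up per step comes from.
    for (rp, bp), (rn, bn) in product(pos, neg):
        cp, cn = rp[-1], -rn[-1]
        row = [cn * a + cp * c for a, c in zip(rp[:-1], rn[:-1])]
        out.append((row, cn * bp + cp * bn))
    return [r for r, _ in out], [v for _, v in out]


# Triangle x >= 0, y >= 0, x + y <= 1; projecting out y gives 0 <= x <= 1.
A2, b2 = eliminate_last([[-1, 0], [0, -1], [1, 1]], [0, 0, 1])
```

Repeating this step $d - k$ times yields the projection as an $\mathcal{H}$-polytope, but with a potentially exponential number of inequalities, consistent with the hardness result cited above.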

Select 4 of $n$ points in 2D to make a rectangle with the greatest area and sides parallel to the axes

On the plane, $n$ points $(x_i, y_i)$ are marked. Select 4 points so that they define a rectangle with the greatest area and sides parallel to the axes.

The time limit for Python is 10 seconds; for other programming languages, 2 seconds.

Input data:

  • the first line contains an integer $n$ $(4 \leq n \leq 3000)$
  • each of the next $n$ lines contains a pair of integer coordinates $x_i\ y_i$ $(-10\,000 \leq x_i,\ y_i \leq 10\,000)$

Output data:

  • 4 different indices (numbers from $ 1$ to $ n$ ), specifying the vertices of the rectangle.

I implemented it in Python, but even on tests with $n \leq 111$ it exceeds the time limit.

    n = int(input())
    l = []
    for i in range(n):
        a, b = map(int, input().split())
        l.append((a, b))

    ans = [1, 2, 3, 4]
    mS = 0

    for i in range(0, n - 3):
        for j in range(i + 1, n - 2):
            for k in range(j + 1, n - 1):
                for t in range(k + 1, n):
                    r = [l[i], l[j], l[k], l[t]]
                    w = sorted(r, key=lambda element: (element[0], element[1]))
                    if (w[0][0] == w[1][0] and w[1][1] == w[3][1]
                            and w[3][0] == w[2][0] and w[2][1] == w[0][1]):
                        s = (w[1][1] - w[0][1]) * (w[3][0] - w[1][0])
                        if s > mS:
                            mS = s
                            ans = [i + 1, j + 1, k + 1, t + 1]

    ans = sorted(ans)
    print(ans[0], ans[1], ans[2], ans[3])
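The quadruple loop is $\Theta(n^4)$, so it will always exceed the time limit. A rectangle is determined by two x-columns that share the same pair of y values, so you can group points by x and hash y-pairs, keeping only the leftmost column for each pair; that is roughly $O(n^2)$ in the worst case, fast enough for $n \leq 3000$. A sketch (the `best_rectangle` helper and its return format are mine, not the required I/O):

```python
from collections import defaultdict
from itertools import combinations


def best_rectangle(points):
    """Return (area, [four 1-based indices]) of the largest axis-parallel
    rectangle, or None if no rectangle exists. Roughly O(n^2) worst case."""
    cols = defaultdict(list)                 # x -> [(y, index), ...]
    for idx, (x, y) in enumerate(points, start=1):
        cols[x].append((y, idx))
    seen = {}                                # (y1, y2) -> leftmost (x, i1, i2)
    best = None
    for x in sorted(cols):                   # sweep columns left to right
        for (y1, i1), (y2, i2) in combinations(sorted(cols[x]), 2):
            if y1 == y2:                     # degenerate pair, no height
                continue
            if (y1, y2) in seen:             # y-pair seen in an earlier column
                x0, j1, j2 = seen[(y1, y2)]
                area = (x - x0) * (y2 - y1)
                if best is None or area > best[0]:
                    best = (area, sorted((j1, j2, i1, i2)))
            else:                            # leftmost column maximizes width
                seen[(y1, y2)] = (x, i1, i2)
    return best


print(best_rectangle([(0, 0), (0, 2), (3, 0), (3, 2), (1, 1)]))  # (6, [1, 2, 3, 4])
```

Since the y-span of a candidate rectangle is fixed by the hashed pair, pairing each column with the leftmost column that saw the same pair always maximizes the width, so only one entry per y-pair needs to be stored.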

Interpolate on a cyclic x axis

Let’s assume you are in 2D space and you have a set of fix points FIX_POINTS = [(x1, y1), (x2, y2)]. I want to interpolate the y value for a given x value using linear interpolation.

Caveat: The x-axis is cyclic, so it wraps around at a certain known value.

I’m searching for a neat algorithm (or a way to write this down) that can do this interpolation: take an x value on the cyclic x axis and compute the y value using linear interpolation.

Example: Given FIX_POINTS = [(10, 50), (90, 100)] and a “cyclic interval” (the value at which the x axis wraps) of 100, the interpolation would lead to the following results:

    interpolation(10) = 50   # No interpolation necessary, directly on fix points
    interpolation(90) = 100  # Same
    interpolation(50) = 75   # Normal interpolation
    interpolation(0) = 75    # Wrap-around interpolation
    interpolation(1) = 72.5  # Same
    interpolation(99) = 77.5 # Same
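There is an existing implementation of exactly this: NumPy's np.interp accepts a period argument (NumPy >= 1.10) that treats the x coordinates as periodic, which reproduces the example above directly:

```python
import numpy as np

FIX_POINTS = [(10, 50), (90, 100)]
xp = [p[0] for p in FIX_POINTS]   # x values of the fix points
fp = [p[1] for p in FIX_POINTS]   # y values of the fix points


def interpolation(x, period=100):
    # `period` tells np.interp that the x axis wraps around at 100, so
    # queries outside [10, 90] interpolate across the seam between the
    # last and first fix points.
    return float(np.interp(x, xp, fp, period=period))


print(interpolation(50))  # 75.0
print(interpolation(0))   # 75.0 (wrap-around)
```

This also avoids the linear scan: np.interp vectorizes over arrays of query points, and the fix points need not be pre-sorted when period is given.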

I’m not too interested in the actual programming; I’m searching for a way to write this down nicely. Maybe there is even an existing implementation for this? I’m having trouble searching for it due to the lack of good search terms.

I implemented it myself, but it turned into a lot of code, so I’m in search of something simpler: https://gist.github.com/theomega/9782be548fd452e1f1469757387b35e4 . This implementation is also far from optimal computationally (it scans over the array).

I’m not sure if this is the stack exchange for this. Feel free to point me in a different direction.