Using the html2canvas API to generate images of custom .aspx pages

So I have an HTML table I'd like to convert to an image, and I found a pretty neat library that does exactly this: html2canvas. I have been able to use it pretty easily with HTML pages on my desktop, but as soon as I try to run the exact same script in an .aspx page on SharePoint, absolutely nothing happens. No message, no error, nothing. I do NOT have access to the server, so I am really locked into a client-side solution. Here is the code I used (which worked for local HTML files).

    <html>
    <head>
        <script src=""></script>
        <script src=""></script>
    </head>
    <body>
    <div id="pdfWrapper" class="pdfWrapper" style="background-color:yellow;border:1px solid black;">
        <h1 id="capNum" style="color:maroon;background-color:blue;">Test</h1>
        <h2 id="title">some info goes here</h2>
        <p id="description">Yadda yadda yadda. Yadda yadda yadda. Yadda yadda yadda. Yadda yadda yadda. Yadda yadda yadda. Yadda yadda yadda. Yadda yadda yadda.</p>
        <table>
            <tr style="background-color:red;"><td>a1</td><td>a2</td><td>a3</td></tr>
            <tr style="background-color:blue;"><td>b1</td><td>b2</td><td>b3</td></tr>
            <tr style="background-color:green;"><td>c1</td><td>c2</td><td>c3</td></tr>
        </table>
    </div>
    <a href="javascript:viewAsImage();">View As Image</a>
    <script>
    function viewAsImage() {
        html2canvas($("#pdfWrapper"), {
            onrendered: function (canvas) {
                var myImage = canvas.toDataURL("image/png");
                var tWindow ="");
                $(tWindow.document.body).html("<img id='Image' src='" + myImage + "' style='width:100%;'>");
            }
        });
    }
    </script>
    </body>
    </html>

I've tried implementations where I don't open a new page, in case SharePoint was blocking popups or something. I am not really familiar with ASP, so it could just be something very simple I'm overlooking. The reason I can't just use plain HTML is that I am also using CSOM to populate some of the table fields from a SharePoint list before I create the image.

PLEASE. Any help on this matter would be so appreciated.

Merging PHP and Node.js Docker images into a developer-ready PHP image for PHP application development and CI

For my application I need to provide an image that will be used to develop and release my PHP application. As we all know, the frontend of a PHP application also requires tools such as gulp, webpack, etc., so offering an image that bundles them with php-fpm would be beneficial for rapid development and building.

But these tools mostly require Node.js and the npm tool, which are offered via the Node.js Docker image. So I thought of the following:

Merging the Node.js and PHP images

In other words, I thought I would make a base PHP image containing the required extensions and then create two other images from it:

  • A development-purpose image containing the required tools for application development, created by somehow merging a Node.js image and the base PHP image.

  • A production-ready image containing the application and the bare minimum needed in order to run it.

Furthermore, both of these images will utilize the base image as well.

So I wanted to know:

  • Does this logic make sense, and is it considered a good idea and best practice in this case?
  • Is it plausible to merge 2 existing images into one, or should I manually install Node.js instead?
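On the second question: Docker has no operation that merges two existing images, since each image is a fixed stack of layers with a single parent. The usual substitute is a multi-stage build that copies the Node.js binaries out of the official node image into the PHP base. A rough sketch under those assumptions (the image tags, extension list, and file paths are examples to verify against the tags you actually use):

```dockerfile
# Base image: php-fpm plus the extensions the app needs (example list)
FROM php:8.2-fpm AS base
RUN docker-php-ext-install pdo_mysql opcache

# Dev image: base + Node.js copied from the official node image.
# These paths match the layout of the official node images, but check
# them against the concrete tag you pick.
FROM base AS dev
COPY --from=node:20-slim /usr/local/bin/node /usr/local/bin/node
COPY --from=node:20-slim /usr/local/lib/node_modules /usr/local/lib/node_modules
RUN ln -s /usr/local/lib/node_modules/npm/bin/npm-cli.js /usr/local/bin/npm \
 && npm install -g gulp-cli webpack-cli

# Production image: base + just the application, no Node at all
FROM base AS prod
COPY . /var/www/html
```

You would build the development variant with `docker build --target dev -t myapp:dev .`, while CI builds `--target prod` for release, so the production image never carries the frontend toolchain.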

How to integrate my own images into this machine learning code by Dr. Martin?

I am fascinated by the machine learning code by Dr. Martin on GitHub. This code gives very good accuracy on the MNIST data set. Now I would like to use this code to do some ML work with my own color images for a different application. I am new to ML and also to Python, and still learning. I got help to find where the MNIST data is fed to the network.

Loading the images at line 37:

    # Download images and labels into mnist.test (10K images+labels) and mnist.train (60K images+labels)
    mnist = mnistdata.read_data_sets("data", one_hot=True, reshape=False)

Feeding to the network at line 98: …

    # You can call this function in a loop to train the model, 100 images at a time
    def training_step(i, update_test_data, update_train_data):

        # training on batches of 100 images with 100 labels
        batch_X, batch_Y = mnist.train.next_batch(100)

        # compute training values for visualisation
        if update_train_data:
            a, c, im, w, b =[accuracy, cross_entropy, I, allweights, allbiases],
                                     {X: batch_X, Y_: batch_Y})
            print(str(i) + ": accuracy:" + str(a) + " loss: " + str(c) + " (lr:" + str(learning_rate) + ")")
            datavis.append_training_curves_data(i, a, c)
            datavis.update_image1(im)
            datavis.append_data_histograms(i, w, b)

However, I still cannot figure out how to change this to load and feed my own images to the network.

I found one example here.

How can I change the existing MNIST loading to use my own images? I would like to create my own MNIST-style data set from my images so that I can use it with the existing code.
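As a starting point, the only contract the training loop above relies on is that `mnist.train.next_batch(100)` returns an `(images, labels)` pair. Below is a minimal sketch of a drop-in replacement backed by your own images, assuming you have already decoded them (e.g. with PIL and `np.stack`) into a numpy array of shape `(N, height, width, channels)` with one-hot labels of shape `(N, classes)`; the class name and its internals are my own, not part of the original code:

```python
import numpy as np

# Hypothetical stand-in for mnist.train: serves shuffled mini-batches
# from arrays you loaded yourself.
class ImageDataset:
    def __init__(self, images, labels):
        assert len(images) == len(labels)
        self.images = images
        self.labels = labels
        self._order = np.random.permutation(len(images))
        self._cursor = 0

    def next_batch(self, batch_size):
        # Reshuffle once the current epoch has been consumed
        if self._cursor + batch_size > len(self.images):
            self._order = np.random.permutation(len(self.images))
            self._cursor = 0
        idx = self._order[self._cursor:self._cursor + batch_size]
        self._cursor += batch_size
        return self.images[idx], self.labels[idx]
```

With something like `train = ImageDataset(my_images, my_labels)`, the call `mnist.train.next_batch(100)` in `training_step` becomes `train.next_batch(100)`. Note that the network's input placeholder shape (28x28x1 for MNIST) would also need to change to match your color images.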

TensorBoard (used via Keras) only displays images for 3 filters per layer of my convnet. What is happening?

I am training a simple convolutional neural net with Keras and calling TensorBoard to visualize the learning process. Under the Images tab I can see the images of the biases and weights for each layer, but only 3 images are displayed per layer even though my network uses 32 filters in both of its conv layers. Is there any way to fix this? The code and a screenshot of the problem are shown below.

Screenshot of the images tab showing only 3 images

Here is the code I use:

    from time import time
    from datetime import datetime

    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
    from keras.optimizers import SGD
    from keras.regularizers import l2
    from keras.callbacks import TensorBoard

    ########### create model
    model = Sequential()
    # first layer is a convolutional layer with 61x540 input and ReLU activation
    model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(61, 540, 1), W_regularizer=l2(0.08)))
    model.add(Dropout(0.25))
    # second layer is a maxpool layer
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(32, (3, 3), activation='relu'))
    model.add(Dropout(0.25))
    # flatten, then a fully connected layer
    model.add(Flatten())
    ##model.add(Dense(20, activation='relu'))
    ##model.add(Dense(10, activation='relu'))
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.25))
    # last layer is an FC layer with 2 neurons
    model.add(Dense(2, activation='softmax'))

    # compile model with gradient descent
    sgd = SGD(lr=0.005, decay=0, momentum=0, nesterov=False)
    ##AD = Adam(lr=0.01, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
    model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])

    # add a callback to stop when the model stops improving
    #early_stp = EarlyStopping(monitor='acc', min_delta=0.0002,
    #                          patience=10, baseline=1.0)

    # create a TensorBoard instance with the path to the logs directory
    log_dir = "logs/fit/" +"%Y%m%d-%H%M%S")
    tensorboard = TensorBoard(log_dir=log_dir, histogram_freq=1,
                              write_graph=True, write_images=True)

    # fit the model, Y, batch_size=50, epochs=100, callbacks=[tensorboard],
              validation_data=(Xtest, Ytest))
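One workaround, independent of what `write_images` decides to render, is to tile all 32 filters of a conv kernel into a single grid and log that one composite image yourself. A sketch of the tiling step only (the function name and grid layout are my own; the kernel shape follows Keras's `(kh, kw, in_channels, filters)` convention):

```python
import numpy as np

# Hypothetical helper: tile every filter of a (kh, kw, in_ch, out_ch)
# conv kernel into one 2-D grid image, `cols` filters per row.
def tile_filters(kernel, cols=8):
    kh, kw, in_ch, out_ch = kernel.shape
    rows = int(np.ceil(out_ch / cols))
    grid = np.zeros((rows * kh, cols * kw))
    for f in range(out_ch):
        r, c = divmod(f, cols)
        # average over input channels so each filter is a 2-D patch
        grid[r * kh:(r + 1) * kh, c * kw:(c + 1) * kw] = kernel[:, :, :, f].mean(axis=2)
    return grid
```

The resulting array (for a 3x3x1x32 kernel, a 12x24 grid) can then be written to the log directory with a single image summary, so all 32 filters are visible at once regardless of how many images the built-in callback emits.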

Any help will be greatly appreciated!

How can I create multiple cropped Media images without re-uploading them?

The solutions I've found allow cropping a Media item of type Image right after uploading it, while the media entity is not yet created (on the "Add media" page).

I need to upload a large full image once (and use it at full size on some nodes), and then crop it differently when I select it in different content types (to display small "zoomed" parts of the full image). So I need a manual crop widget to appear when I select a Media item of type Image from the Media library. It could appear in the Entity Browser or on the node page.

Another option would be to create the cropped images in advance, either in my media library (/admin/content/media) or when adding a media item (/media/add/image). The point is that I do not want to re-upload the image and enter a new title and alt text each time.

Is this possible?