ERROR: gcloud crashed (UnicodeDecodeError): 'utf8' codec can't decode byte 0xc8 in position 145: invalid continuation byte

I have installed the Cloud SDK and tried to run gcloud init, but I am getting the error below.

Can anyone please help with this?

khumansingh@DESKTOP-97PCNCB:~$ gcloud init
Welcome! This command will take you through the configuration of gcloud.

Your current configuration has been set to: [default]

You can skip diagnostics next time by using the following flag: gcloud init --skip-diagnostics

Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.
Reachability Check passed.
Network diagnostic passed (1/1 checks passed).

ERROR: gcloud crashed (UnicodeDecodeError): 'utf8' codec can't decode byte 0xc8 in position 145: invalid continuation byte

If you would like to report this issue, please run the following command: gcloud feedback

To check gcloud for common problems, please run the following command: gcloud info --run-diagnostics
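For context, gcloud is a Python program, so this crash is Python's UTF-8 decoder hitting a byte (0xc8) that is not valid UTF-8 somewhere in the text the SDK reads during init. The sketch below is a minimal Python reproduction with made-up bytes, purely for illustration; it also shows what the offending byte would mean under common legacy Windows encodings:

# 0xc8 starts a two-byte UTF-8 sequence, so it must be followed by a
# continuation byte (0x80-0xBF); here it is not, which triggers the same
# "invalid continuation byte" error that gcloud reports.
data = b"some local value \xc8X"   # illustrative bytes, not the SDK's real input

try:
    data.decode("utf-8")
except UnicodeDecodeError as exc:
    print(exc)   # 'utf-8' codec can't decode byte 0xc8 in position 17: invalid continuation byte

# The same byte decodes fine under single-byte legacy encodings, a hint that
# some input (a config file, hostname, or environment value) was saved in a
# Windows code page rather than UTF-8.
print(data.decode("cp1252"))    # 0xc8 -> 'È'
print(data.decode("latin-1"))   # 0xc8 -> 'È'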

Display an image in PrimeFaces from a byte[]

I want to display an image with p:graphicImage from a byte[]. My code is the following:

public String foto() throws IOException {
    String path = "C:\\Users\\Downloads\\SFTP\\images.jpg";
    FileInputStream imagen = new FileInputStream(path);
    byte[] image = IOUtils.toByteArray(imagen);
}

I tried the following: <p:graphicImage value="#{imagen}" width="200"/>, but nothing is displayed.

Remove BOM byte ("\ufeff") from response

I'm using the M2E Pro extension on Magento 2.3 and suddenly synchronization stopped working. That's because M2E Pro uses its own cron job and sends cURL calls to the server, but my server responds with a BOM byte. Example script:

<?php

$ch = curl_init();
curl_setopt_array($ch, array(
    CURLOPT_URL => 'http://my-sire-ip/index.php/M2ePro/cron/test/',
    CURLOPT_HTTPHEADER => array('Host: my-site-name.com'),
    CURLOPT_RETURNTRANSFER => 1,
));

var_dump(json_encode(curl_exec($ch))); // returns string(8) ""\ufeff""

// "\ufeff" is the BOM; it should not be in the response.

The response is:

string(8) ""\ufeff""

This is what is stopping their cron from working on my server. How can I remove this BOM byte so it works properly?
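For illustration, the BOM is the single character U+FEFF, which UTF-8 encodes as the three bytes EF BB BF; whatever emits those bytes before the real payload produces exactly the "\ufeff" prefix shown above. Here is a minimal Python sketch of the concept (not the Magento-side fix), showing how the BOM looks in a raw response body and how it can be stripped:

# The UTF-8 BOM (U+FEFF) is the three bytes EF BB BF at the start of the data.
BOM = b"\xef\xbb\xbf"
body = BOM + b'{"status": "ok"}'        # illustrative response body

print(body.startswith(BOM))             # True -> the BOM really is in the bytes

# Strip it at the byte level...
if body.startswith(BOM):
    body = body[len(BOM):]
print(body.decode("utf-8"))             # {"status": "ok"}

# ...or let the utf-8-sig codec drop a single leading BOM while decoding.
print((BOM + b"hello").decode("utf-8-sig"))   # hello

Dumping the first few raw bytes of the cURL response, before json_encode, is the quickest way to confirm the BOM really is in the body itself.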

I am getting a 401 Unauthorized exception while downloading a file into a byte array

using (WebClient client = new WebClient())
{
    client.Headers.Add(HttpRequestHeader.ContentType, "application/json;odata=verbose");
    client.Headers.Add(HttpRequestHeader.Accept, "application/json;odata=verbose");
    client.Headers.Add(HttpRequestHeader.ContentEncoding, "UTF-8");
    client.Headers.Add("Authorization", "Bearer" + accessToken);
    client.Headers.Add("X-FORMS_BASED_AUTH_ACCEPTED", "f");
    client.Credentials = credential;

    Uri endpointUri = new Uri(webUrl + "/_api/web/GetFileByServerRelativeUrl('" + folderServerRelativeUrl + "/" + fileName + "')/$value");

    //string result = client.DownloadString(endpointUri);
    byte[] data = client.DownloadData(endpointUri);

    FileStream outputStream = new FileStream(path + fileName, FileMode.OpenOrCreate | FileMode.Append, FileAccess.Write, FileShare.None);
    outputStream.Write(data, 0, data.Length);
    outputStream.Flush(true);
    outputStream.Close();
}

I am getting the exception at byte[] data = client.DownloadData(endpointUri);. The endpoint URI I have generated works fine when I hit it in a browser or with Postman; I am able to download the file using that URI, it's just not working through code.

I'm lost and couldn't find anything. Any help would be appreciated.
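For comparison, this is the shape of the same download at the HTTP level, written as a minimal Python sketch; web_url, folder_server_relative_url, file_name, and the access token below are placeholders standing in for the variables used in the C# code, and the endpoint format is just the SharePoint-style URL from the question:

import requests

# Placeholder values standing in for the variables in the C# snippet.
web_url = "https://example.sharepoint.com/sites/demo"
folder_server_relative_url = "/sites/demo/Shared Documents"
file_name = "report.pdf"
access_token = "<access token>"

endpoint = (
    f"{web_url}/_api/web/GetFileByServerRelativeUrl("
    f"'{folder_server_relative_url}/{file_name}')/$value"
)

headers = {
    "Accept": "application/json;odata=verbose",
    # The Bearer scheme is the word "Bearer", a space, then the token.
    "Authorization": f"Bearer {access_token}",
}

response = requests.get(endpoint, headers=headers)
response.raise_for_status()         # raises immediately on 401/403 responses

with open(file_name, "wb") as out:
    out.write(response.content)     # raw bytes of the downloaded file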

Mono: creating a dictionary of int and byte[]

I am creating a project in C++ and using Mono to work together with another program written in C#, but I ran into a small problem: I need to build a dictionary of <int,byte[]> in C++ and pass it to the C# program. Preferably I would not want to have to create a function in the C# project just to build the dictionary; I would like to build it in my C++ project and simply pass it over to the C# project. Is there any way to do this?

TensorFlow.js loadLayersModel RangeError on Node 8.10 / AWS Lambda, "byte length of Float32Array should be a multiple of 4", but it works in the browser?

I have trained and saved an image-classification CNN using TensorFlow.js, following the browser example provided:

https://codelabs.developers.google.com/codelabs/tfjs-training-classfication/index.html#6

Then I uploaded my model's JSON and bin files to a public S3 bucket and tried to load the model like so in a Lambda, so that it can later make and return a prediction when I call model.predict().

var tf = require('@tensorflow/tfjs');
global.fetch = require('node-fetch');

exports.handler = async function(event, context) {
    var data = typeof event === 'string' ? JSON.parse(event) : event;

    var shidDataItem = data.shipDataItem;
    var image = predictionData.tripImage;

    console.log('getting model');
    const model = await tf.loadLayersModel('https://s3.amazonaws.com/{my_bucket}/my-model.json');
    console.log(model);
}

But when I test this, I get the following error:

{
  "errorMessage": "byte length of Float32Array should be a multiple of 4",
  "errorType": "RangeError",
  "stackTrace": [
    "typedArrayConstructByArrayBuffer (<anonymous>)",
    "new Float32Array (native)",
    "_loop_1 (/var/task/node_modules/@tensorflow/tfjs-core/dist/io/io_utils.js:159:30)",
    "Object.decodeWeights (/var/task/node_modules/@tensorflow/tfjs-core/dist/io/io_utils.js:189:9)",
    "/var/task/node_modules/@tensorflow/tfjs-layers/dist/models.js:298:50",
    "step (/var/task/node_modules/@tensorflow/tfjs-layers/dist/models.js:54:23)",
    "Object.next (/var/task/node_modules/@tensorflow/tfjs-layers/dist/models.js:35:53)",
    "fulfilled (/var/task/node_modules/@tensorflow/tfjs-layers/dist/models.js:26:58)",
    "<anonymous>",
    "process._tickDomainCallback (internal/process/next_tick.js:228:7)"
  ]
}

However, this snippet works in the browser:

const model = await tf.loadLayersModel('https://s3.amazonaws.com/{my_bucket}/my-model.json'); 

Has anyone encountered this issue? Any suggestions for loading a model in a Lambda to make a quick prediction? Thanks for your time.
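The error itself says the buffer being handed to Float32Array is not a multiple of 4 bytes (the size of one float32). As a way to narrow it down, here is a minimal Python sketch, with a placeholder URL standing in for the .bin weight shard that sits next to my-model.json, that checks what a plain HTTP fetch of the weights actually returns:

import requests

# Placeholder URL for the weight shard referenced by my-model.json.
weights_url = "https://s3.amazonaws.com/{my_bucket}/my-model.weights.bin"

response = requests.get(weights_url)
body = response.content

print("status:", response.status_code)
print("content-type:", response.headers.get("Content-Type"))
print("content-encoding:", response.headers.get("Content-Encoding"))
print("byte length:", len(body), "-> remainder mod 4:", len(body) % 4)

# A non-zero remainder means the bytes reaching Float32Array are not the raw
# weight buffer (for example, the body was transformed somewhere in transit),
# which is exactly the condition the RangeError complains about.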

How to convert a zip to a byte array and read it as a stream (without loading the whole file into memory)?

I have an LZMA zip archive that I want to decompress using LZMA-JS. The issue is that the decompress method only accepts a byte array or a Node Buffer object, and I do not know how to convert my zip file to a buffer without loading the whole file into memory, so ideally it would be some kind of read stream, as the files I am trying to convert are huge.

It would be great if someone could suggest a way to create a byte-array stream from a zip file using JavaScript. Thank you so much! 🙂
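To illustrate the general idea of treating a huge file as a stream of byte chunks instead of one big in-memory array, here is a minimal Python sketch; the Node equivalent would be fs.createReadStream, which emits Buffer chunks in the same way. Whether the chunks can then be fed to the decompressor piece by piece depends on whether the library exposes an incremental API:

CHUNK_SIZE = 64 * 1024   # 64 KiB per read; adjust to taste

def iter_chunks(path, chunk_size=CHUNK_SIZE):
    """Yield the file as a sequence of byte chunks instead of one big bytes object."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

total = 0
for chunk in iter_chunks("archive.zip"):   # placeholder file name
    total += len(chunk)                    # a real consumer would hand each chunk
                                           # to an incremental decompressor here
print("streamed", total, "bytes without holding the whole file in memory")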