$batch not working in SharePoint Online REST API

Trying to work out the $batch REST API in SharePoint Online (Office 365).

My request payload (simple GET request)

--batch_45993c25-30ce-459d-830f-a5aabaf4901f
Content-Type: application/http
Content-Transfer-Encoding: binary

GET https://{tenant}.sharepoint.com/test/_api/Web/Lists/GetByTitle('List')/Items?$orderby=Title HTTP/1.1
Accept: application/json;odata=verbose

--batch_45993c25-30ce-459d-830f-a5aabaf4901f--

Creating one item and getting it back:

--batch_28fcce05-10f9-4362-e7fb-55208b1ec9d8
Content-Type: multipart/mixed; boundary=changeset_eabea18e-488d-46fc-ba25-841d268c61c4
Content-Length: 406
Content-Transfer-Encoding: binary

--changeset_eabea18e-488d-46fc-ba25-841d268c61c4
Content-Type: application/http
Content-Transfer-Encoding: binary

POST https://{tenant}.sharepoint.com/test/_api/Web/Lists/GetByTitle('List')/Items HTTP/1.1
Content-Type: application/json;odata=verbose

{"Title":"2","Message":"2","__metadata":{"type":"SP.Data.ListListItem"}}

--changeset_eabea18e-488d-46fc-ba25-841d268c61c4--

--batch_28fcce05-10f9-4362-e7fb-55208b1ec9d8
Content-Type: application/http
Content-Transfer-Encoding: binary

GET https://{tenant}.sharepoint.com/test/_api/Web/Lists/GetByTitle('List')/Items?$orderby=Title HTTP/1.1
Accept: application/json;odata=verbose

--batch_28fcce05-10f9-4362-e7fb-55208b1ec9d8--

The response I get is the same for both of them:

{
  "error": {
    "code": "-1, Microsoft.Data.OData.ODataContentTypeException",
    "message": {
      "lang": "en-US",
      "value": "A supported MIME type could not be found that matches the content type of the response. None of the supported type(s) 'multipart/mixed' matches the content type 'application/json;odata=verbose'."
    }
  }
}

The request matches the docs exactly, and still no luck.


Can someone help?
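The error text suggests the server rejected the outer request's Content-Type: it expected multipart/mixed but saw application/json;odata=verbose. A minimal sketch (Python; the helper name and example URL are mine, and authentication is omitted) of assembling the outer $batch POST so that its Content-Type header advertises the same boundary used in the body:

```python
import uuid


def build_batch(get_url):
    """Assemble a multipart/mixed $batch body containing a single GET part."""
    boundary = "batch_" + str(uuid.uuid4())
    body = (
        "--" + boundary + "\r\n"
        "Content-Type: application/http\r\n"
        "Content-Transfer-Encoding: binary\r\n"
        "\r\n"
        "GET " + get_url + " HTTP/1.1\r\n"
        "Accept: application/json;odata=verbose\r\n"
        "\r\n"
        "--" + boundary + "--\r\n"
    )
    # The outer POST to /_api/$batch must carry this header; sending
    # application/json;odata=verbose here produces the ODataContentTypeException above.
    headers = {"Content-Type": "multipart/mixed; boundary=" + boundary}
    return headers, body
```

The headers/body pair would then be passed to whatever HTTP client performs the POST to `/_api/$batch`.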

Batch convert an entire folder of pdf files to jpg or png using pdftoppm in ubuntu

I have a folder of nearly 4000 PDF files that a co-worker accidentally scanned as PDFs instead of JPGs. We were scanning nearly 7000 paper files, and at some point the second shift somehow changed the save format to PDF. This was originally done on Windows. I have been chasing a way to correct this for over a week, and everyone says to use Linux, so along the way I installed Mint, then dumped that for Ubuntu. I know nothing about the command line; all I need is a simple command to convert the contents of the entire folder. The problem is that out of all the files scanned, we don't know which of the originals were done correctly and which were saved as PDF, or we could just re-scan them. All the PDF files are separated into a folder by themselves. I have no idea what to type, where to type it, or how to get this to work. HELP please!

Mass/batch update/edit cells within a grid

I am looking for some ideas on the UX of allowing a user to edit multiple cells in a grid at one time. These users may have thousands of records, and they need to provide data on each record; this data may be duplicated across multiple records, so I am trying to figure out a clean way to let them apply a value to multiple cells easily.

Hoping not to reimplement Excel in my application.

Any help would be appreciated 🙂

I'm having trouble with copying in Batch

Here's the situation: I have a directory in Windows full of txt files, all numbered consecutively (example: f1, f2, f3, etc.). There are more than 2 million of them. What I need is a script that copies 400 files at a time, creating a folder with a numeric name for each group (example: folder1, folder2, folder3, etc.) before copying the next 400 files. I have the following script; the problem is that it copies everything into a single folder that it creates:

@echo off

rem COUNT
set c=0

rem FOLDER ID
set f=0

mkdir folder%f%
echo Copying to folder%f%...

for %%i in (*) do (
    if %%c LSS 400 (
        rem COPY TO CURRENT FOLDER
        copy %%i folder%f%\
    ) else (
        rem INCREASE FOLDER ID
        set /a f+=1

        rem ADD A NEW FOLDER
        mkdir folder%f%
        echo Copying to folder%f%...

        rem RESET ITERATION COUNT
        set c=0
    )
)

Copy data from one CSV file to another in successive intervals, based on number of rows, using a batch script

I have a requirement where I have one CSV file, say FILE1, which is generated dynamically every day with n rows. I have another CSV, FILE2, with n rows (one-time data). My requirement is to copy data from FILE2 to FILE1, but the number of rows after merging should not exceed 500k. The remaining data should get appended to the next day's FILE1.

For example, Day 1: FILE1 is generated with 100k rows and FILE2 sits in the repository with a total of 700k rows. Then pick only 400k records from FILE2 and append them to FILE1, so that the total does not exceed 500k.

Day 2: FILE1 is generated with 50k rows; append the remaining rows from FILE2.

Similarly, FILE1's row count can vary, and if the merge would exceed 500k records, the rest of the data from FILE2 should be picked up in the next day's batch.

I am new to batch scripting and have somehow managed to merge the two files, but I am not able to get the row count and merge partially as per the requirement. Any help is appreciated.

Moving files with cmd, bat, Batch

A dumb question, made specifically for 2 a.m.: a bat file doesn't know where it is being run from (С:33\bat.bat). It needs to copy the file 12321.bat from C:333\ to C:33\. How can it do that without knowing the full path? The bat file has to be runnable from anywhere and reliably copy the file from /123/ to /.

Listings in batch

My goal is to make a list of all the files in a folder, including files that are themselves inside a subfolder. I also want to know the date they were last edited.

My first idea was to use the dir command with the path to the folder I want to analyze. When I tried it, I used this code: dir C:\Users\Garci\Onedrive\Escritorio\Prueba > Lista.txt

This is the result of the code above.

But this does not show me the file inside Car2, which is a folder. It does have everything else, that is, the last modification date. Then I kept searching and found the tree command, with which I did this:

tree C:\Users\Garci\Onedrive\Escritorio\Prueba /f /a > Listado.txt

Result of the code above.

Using this code shows me all the folders and their subfiles, but it does not tell me the modification date.

Could someone tell me how to achieve my goal? Ideally I would be able to put the date next to each file in the output where I used the tree command. Thanks.

Batch retrieve formatted address along with geometry (lat/long) and output to csv

I have a csv file with 3 fields, two of which are of interest to me: Merchant_Name and City.
My goal is to output multiple csv files, each with 6 fields: Merchant_Name, City, name, formatted_address, latitude, longitude.

For example, if one entry of the csv is Starbucks, Chicago, I want the output csv to contain all the information in the 6 fields (as mentioned above), like so:
Starbucks, Chicago, Starbucks, "200 S Michigan Ave, Chicago, IL 60604, USA", 41.8164613, -87.8127855
Starbucks, Chicago, Starbucks, "8 N Michigan Ave, Chicago, IL 60602, USA", 41.8164613, -87.8127855
and so on for the rest of the results.

For this, I used the Text Search request of the Google Maps Places API. Here is what I wrote:

import pandas as pd
# import googlemaps
import requests
# import csv
# import pprint as pp
from time import sleep
import random


def search_output(search):
    if len(data['results']) == 0:
        print('No results found for {}.'.format(search))
    else:
        # Create csv file
        filename = search + '.csv'
        f = open(filename, "w")

        size_of_json = len(data['results'])

        # Get next page token
        # if size_of_json = 20:
        #     next_page = data['next_page_token']

        for i in range(size_of_json):
            name = data['results'][i]['name']
            address = data['results'][i]['formatted_address']
            latitude = data['results'][i]['geometry']['location']['lat']
            longitude = data['results'][i]['geometry']['location']['lng']

            f.write(name.replace(',', '') + ',' + address.replace(',', '') + ',' + str(latitude) + ',' + str(longitude) + '\n')

        f.close()
        print('File successfully saved for "{}".'.format(search))

        sleep(random.randint(120, 150))


API_KEY = 'your_key_here'
PLACES_URL = 'https://maps.googleapis.com/maps/api/place/textsearch/json?'

# Make dataframe
df = pd.read_csv('merchant.csv', usecols=[0, 1])

# Construct search query
search_query = df['Merchant_Name'].astype(str) + ' ' + df['City']
search_query = search_query.str.replace(' ', '+')

random.seed()

for search in search_query:
    search_req = 'query={}&key={}'.format(search, API_KEY)
    request = PLACES_URL + search_req

    # Place request and store data in 'data'
    result = requests.get(request)
    data = result.json()

    status = data['status']

    if status == 'OK':
        search_output(search)
    elif status == 'ZERO_RESULTS':
        print('Zero results for "{}". Moving on..'.format(search))
        sleep(random.randint(120, 150))
    elif status == 'OVER_QUERY_LIMIT':
        print('Hit query limit! Try after a while. Could not complete "{}".'.format(search))
        break
    else:
        print(status)
        print('^ Status not okay, try again. Failed to complete "{}".'.format(search))
        break

I want to implement the next page token but cannot think of a way that wouldn't make it all a mess. Another thing I wish to improve is my csv-writing block, and how it deals with redundancy.
I further plan to concatenate all the csv files into one (but still keep the original separate files).

Please note that I'm new to programming; in fact, this is one of my first programs that actually achieves something, so please elaborate a bit more where needed. Thanks!