How to edit SharePoint page content/URLs using the Client Object Model

I have a SharePoint site on SharePoint Online with a Pages library containing 5 pages with content. I need to modify the URLs in these pages programmatically using the Client Object Model.

Suppose I have a page Newname.aspx, and this page has some links which I need to modify programmatically. Newname.aspx has 15 broken links, and every time I have to open this page in edit mode and manually update all the broken links.

Is there any way to modify these broken links programmatically?
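For context, the kind of approach I have been experimenting with looks roughly like this. It is only a sketch: the site URL, account, page and link values are placeholders, it assumes the classic CSOM package with SharePointOnlineCredentials, and it assumes publishing pages whose HTML lives in the PublishingPageContent field (wiki pages use the WikiField column instead).

using System;
using System.Security;
using Microsoft.SharePoint.Client;

class FixPageLinks
{
    static void Main()
    {
        // Placeholder site, account and URLs - not real values.
        var siteUrl = "https://contoso.sharepoint.com/sites/demo";
        var password = new SecureString();
        foreach (var c in "password-here") password.AppendChar(c);

        using (var ctx = new ClientContext(siteUrl))
        {
            ctx.Credentials = new SharePointOnlineCredentials("user@contoso.com", password);

            // Find Newname.aspx in the Pages library.
            var pages = ctx.Web.Lists.GetByTitle("Pages");
            var query = new CamlQuery
            {
                ViewXml = "<View><Query><Where><Eq>" +
                          "<FieldRef Name='FileLeafRef'/><Value Type='File'>Newname.aspx</Value>" +
                          "</Eq></Where></Query></View>"
            };
            var items = pages.GetItems(query);
            ctx.Load(items);
            ctx.ExecuteQuery();

            var page = items[0];

            // Check out, rewrite the stored HTML, then check in again.
            page.File.CheckOut();

            var html = (string)page["PublishingPageContent"];
            page["PublishingPageContent"] = html.Replace(
                "http://old-server/Pages/Broken.aspx",
                "https://contoso.sharepoint.com/sites/demo/Pages/Fixed.aspx");
            page.Update();

            page.File.CheckIn("Fixed broken links", CheckinType.MajorCheckIn);
            ctx.ExecuteQuery();
        }
    }
}

Whether the check-out/check-in step is needed at all depends on the library's versioning settings, and the Replace calls would need to cover all 15 broken URLs.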

In OAuth 2.0, how is the client secret supposed to be kept secret?

Using most OAuth 2.0 flows, a client application can identify itself to the authorization server by means of a “client id” and “client secret.”

The OAuth 2 specification says that the client secret should indeed be kept secret.

However, if the client secret is inside of the application, then it’s not secret – someone can use a debugger, disassembler, etc. to view it.

So I am not sure about the effectiveness and/or purpose of this client secret. Furthermore, are there any recommendations for securing a client secret on a client under the control of the general populace? The purpose here is to establish some form of trust between the client application and the authorization server, independent of the resource owner (user).

Finally, what is the difference between using an OAuth flow without a client secret vs. using one with a client secret and not keeping that “client secret” actually secret?
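To make that last question concrete, the only mechanical difference I can see is whether the token request carries the secret. A rough sketch of the two cases follows; the endpoint, client id, secret and redirect URI are placeholders, not values from any real provider.

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class TokenRequests
{
    static readonly HttpClient Http = new HttpClient();

    // Confidential client: authorization-code exchange that includes the client secret.
    static Task<HttpResponseMessage> ExchangeWithSecret(string code) =>
        Http.PostAsync("https://auth.example.com/oauth/token", new FormUrlEncodedContent(
            new Dictionary<string, string>
            {
                ["grant_type"] = "authorization_code",
                ["code"] = code,
                ["redirect_uri"] = "https://app.example.com/callback",
                ["client_id"] = "my-client-id",
                ["client_secret"] = "my-client-secret" // the value this question is about
            }));

    // Public client: the same exchange without a secret (typically protected by PKCE instead).
    static Task<HttpResponseMessage> ExchangeWithoutSecret(string code, string codeVerifier) =>
        Http.PostAsync("https://auth.example.com/oauth/token", new FormUrlEncodedContent(
            new Dictionary<string, string>
            {
                ["grant_type"] = "authorization_code",
                ["code"] = code,
                ["redirect_uri"] = "https://app.example.com/callback",
                ["client_id"] = "my-client-id",
                ["code_verifier"] = codeVerifier
            }));
}

If the secret can be extracted from the app anyway, these two requests look functionally equivalent to me, which is the heart of the question.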

How to send messages in the order they were queued, while ensuring that client B does not have to wait until client A has received his message?

I have a simplified producer/consumer pattern implemented below. The code outputs:

“A”

1 second delay

“B”

1 second delay

“A”

1 second delay

“B”

What approach can I take here to get rid of the 1-second delay between different letters?

What I’m looking for is something like

“A”

“B”

1 second delay

“A”

“B”

It’s important that clients A and B receive the messages in the order the messages were queued in, but I do not want other clients to be blocked while processing for one client takes a really long time. Using two BlockingCollections and two consumer threads is not an option, because the user count is dynamic.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

namespace ConsumerProducer
{
    public enum ClientId
    {
        A,
        B
    }

    class WebSocketMessage
    {
        public ClientId ClientId { get; }

        public WebSocketMessage(ClientId clientId)
        {
            ClientId = clientId;
        }

        public async Task LongRunningSend()
        {
            Console.WriteLine(ClientId);
            await Task.Delay(TimeSpan.FromSeconds(1));
        }
    }

    class Program
    {
        public static BlockingCollection<WebSocketMessage> Messages = new BlockingCollection<WebSocketMessage>();

        static async Task Main(string[] args)
        {
            var consumer = Task.Run(async () =>
            {
                foreach (var message in Messages.GetConsumingEnumerable())
                {
                    await message.LongRunningSend();
                }
            });

            ClientId clientId = ClientId.B;
            while (true)
            {
                // Flip between A and B
                clientId = clientId == ClientId.A ? ClientId.B : ClientId.A;

                Messages.Add(new WebSocketMessage(clientId));

                await Task.Delay(TimeSpan.FromMilliseconds(100));
            }
        }
    }
}
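For what it’s worth, the kind of per-client ordering I’m imagining would look roughly like this. It is just a sketch that reuses the WebSocketMessage and ClientId types above, introduces a hypothetical EnqueueMessage method in place of Messages.Add, and assumes a single producer thread calls it.

using System.Collections.Concurrent;
using System.Threading.Tasks;

class PerClientSender
{
    // Tail of the send chain for each client: messages for the same client run in order,
    // while sends for different clients can overlap.
    private readonly ConcurrentDictionary<ClientId, Task> _tails =
        new ConcurrentDictionary<ClientId, Task>();

    public void EnqueueMessage(WebSocketMessage message)
    {
        _tails.AddOrUpdate(
            message.ClientId,
            key => message.LongRunningSend(),
            (key, tail) => tail.ContinueWith(previous => message.LongRunningSend()).Unwrap());
    }
}

The producer loop would then call sender.EnqueueMessage(new WebSocketMessage(clientId)) instead of Messages.Add(...), but I’m not sure whether chaining tasks like this is a sound approach, hence the question.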

Verifying Workflow Manager Client Installation

This is a similar question to here, but I was asked to post a new question instead of posting on that one. So, my question is regarding the Workflow Manager Client. I just installed the WFM server component on the app server in my SP farm (a small farm with one WFE, one app server and one DB backend).

As per the instructions here, for my scenario (installing WFM on a server in the SP farm and using HTTPS) I was supposed to:

  • Install WFM
  • Run the Register-SPWorkflowService cmdlet to pair the WFM farm with the SP farm
  • Install the WFM Client on the remaining SP farm servers

…in that order. So that is exactly what I did. Everything went smoothly with both the WFM server and client installations. But after installing the client, when I run the following cmdlets on the WFM server:

Get-WFFarm
Get-WFFarmStatus

…they show the WFM server (the SP app server) as the only machine in the farm (the output of Get-WFFarmStatus lists it as both the frontend and the backend). Neither cmdlet’s output mentions the WFE, which is running the WFM Client.

So, I am trying to find out how I can verify that the client is properly joined to the WFM farm. I’m also wondering if the instructions at the MS link above are incorrect, because many other things I’m reading now say to run the Register-SPWorkflowService cmdlet after installing the WFM client on all SP servers, which contradicts the info in the MS article. The article (which is recent – 2018) only says to do things in that order when installing WFM on a server outside the SP farm, which was not my scenario. But if I did do things in the wrong order, should I run the cmdlet again now that the client is installed? Is there any harm in doing so?

I have done a lot of Googling on this issue and have not been able to find any info at all on how to validate a Workflow Manager Client installation, i.e. how to confirm that a WFM client is properly joined to the WFM farm and communicating with the WFM server. The Workflow Service App Proxy says it is running, and when I go to its details it says “Workflow is connected”. But again, this gives me no info about the client.

Any help is much appreciated!

Obfuscating “sensitive” strings in mobile client

We have a client that runs some native (C++) code on both Android and iOS; to mitigate MITM attacks we use certificate pinning.

This means that the binary includes some strings (const char * const bla = "XXXXXXXXXX") that identify the allowed certs to enable HPKP.

Some are worried that nefarious users will easily identify those strings (because they look like SHA-256 hashes and are passed to the relevant functions), replace them, and analyze the traffic.

  • Would it make it objectively harder if we obfuscated those strings at compile time and then de-obfuscated them at run-time (roughly as sketched below)?
  • Would it make it worse, because instead of sitting in read-only memory (speculation, I know – it’s not required by the standard, but it makes a lot of sense) the value is now read at run-time from some regular object?
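To illustrate what I mean by the first bullet, here is a rough sketch of the idea, written in C# for brevity even though our real code is C++; the key and byte values are made up for illustration.

using System.Linq;
using System.Text;

static class PinnedKeys
{
    // Produced at build time: the real pin string XOR'd byte-by-byte with Key.
    // (The key and bytes here are made up.)
    private const byte Key = 0x5A;
    private static readonly byte[] ObfuscatedPin =
    {
        0x29, 0x32, 0x3B, 0x68, 0x6E, 0x17, 0x02, 0x0F
    };

    // De-obfuscated on demand, so the plain pin never appears as a literal in the binary.
    public static string Pin =>
        Encoding.ASCII.GetString(ObfuscatedPin.Select(b => (byte)(b ^ Key)).ToArray());
}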

HTTP 404 for REST calls from client side (browsers)

I’m having trouble distinguishing between “real” 404s and 404s where the path is correct but the id, for example, doesn’t exist, when it comes to client-side apps.

Most REST articles and answers here talk about returning a 404 if a resource isn’t found. I do understand that a REST URI points to a specific resource and therefore, whether the requested resource isn’t there (no such id) or the whole path is wrong, 404 is still the response.

The problem is that browsers tend to treat 404s as actual errors before the response even reaches the app code, which pollutes the console and hides real 404s (e.g. an image isn’t present on the CDN). The second issue is that 4xx HTTP codes are described everywhere as client errors. But if the path is correct and only an ID is not present, this isn’t technically an error; it’s proper and expected functionality. It looks akin to relying on exceptions for logical flow in code.
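For concreteness, this is the kind of endpoint I mean – a hypothetical ASP.NET Core controller with an in-memory stand-in for the data store:

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class OrdersController : ControllerBase
{
    // Stand-in for a real data store.
    private static readonly Dictionary<int, string> Orders = new Dictionary<int, string>
    {
        [1] = "First order",
        [2] = "Second order"
    };

    // GET /api/orders/42 - the path is valid, only the id is unknown...
    [HttpGet("{id:int}")]
    public IActionResult GetById(int id)
    {
        if (!Orders.TryGetValue(id, out var order))
            return NotFound(); // ...yet the browser console logs this just like a mistyped URL

        return Ok(order);
    }
}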

Is there a proper way to handle such scenarios, without spamming the browser console?

SharePoint as a Client Portal

For the record, I am not familiar with SharePoint.

My company is looking to create a file portal for clients to access our documentation. This is strictly a need for the clients; there is no need for internal sharing. We need to have a folder structure and be able to assign permissions.

1) Is SharePoint a good option for us?

2) Are these features available on SharePoint out-of-the-box?

API Gateway + Client App in production

I am currently embarking on my first SaaS microservices-based application, built with .NET Core and Docker containers, but there is something I am struggling to get my head around: how things would sit when they are deployed to a production environment.

What I have at the moment is a system which can be called from 2 different client applications. One is an external web application, and the other is a Single-Page Application which is being built alongside the microservices.

My understanding of the API Gateway pattern is that it is a single point of entry for client applications to access the underlying microservices, and to achieve that my idea was to use Ocelot, routing anything with /api/{whatever} to the various microservices underneath. One complexity there, though, is that my application has a wildcard subdomain (users create the subdomain when registering), and I’m not sure whether Ocelot supports that. Either way, I would also have the SPA, which would be on the same domain, and then the identity provider too (under /account, for example).
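For reference, the gateway wiring I have in mind is nothing more than the standard Ocelot setup, roughly as below; this is just a sketch using the minimal hosting model, and the actual /api/{whatever} route definitions would live in ocelot.json.

using Ocelot.DependencyInjection;
using Ocelot.Middleware;

var builder = WebApplication.CreateBuilder(args);

// The /api/{whatever} -> microservice route mappings live in ocelot.json.
builder.Configuration.AddJsonFile("ocelot.json", optional: false, reloadOnChange: true);
builder.Services.AddOcelot(builder.Configuration);

var app = builder.Build();

// Every request hitting the gateway is handled by Ocelot's routing pipeline.
await app.UseOcelot();
app.Run();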

Now assuming I have these 3 in place, would I then need something sitting on top of the SPA, API Gateway and Identity Provider to route those top-level paths (e.g. Traefik)? Other guidance I have read suggests using the API gateway to handle those additional routes, but that seems to defeat the purpose of the API gateway, as the flow for the SPA would then be:

Browser > API Gateway > SPA > [User Action on SPA] > API Gateway > Microservices

Rather than what I would imagine should be:

Browser > SPA > [User Action on SPA] > API Gateway > Microservices

So, my questions are:

  • What guidance would you suggest for deployment of an API Gateway, SPA and Identity Provider under the same domain?
  • How would I configure Ocelot to handle wildcard subdomains, or is that not possible?

Failing to connect to remote desktop using VNC client on Mac

My local system:

MacOS Mojave 10.14.6 

I launched an instance on AWS, as follows:

Ubuntu 18.04 

I had the security group set up to allow SSH and RDP from everywhere (0.0.0.0/0)

I downloaded the server key, and used it to connect to the remote server, from a Mac Terminal, as the ubuntu user.

I then installed the LXDE desktop on the Ubuntu server

sudo apt update -y
sudo apt upgrade -y
sudo apt install lxde -y

When that was finished, I installed xrdp

sudo apt install xrdp -y 

I then set up a password for the user ubuntu:

sudo passwd ubuntu 

I downloaded the VNC Viewer to my Mac, installed it and started it.

I use GoDaddy, so I created a DNS entry for my remote server. When I ping it:

ping xxx.xxxxx-swdev.com
PING xxx.xxxxx-swdev.com (xx.xxx.40.14) 56(84) bytes of data.
^C
--- rxxx.xxxxx-swdev.com ping statistics ---
251 packets transmitted, 0 received, 100% packet loss, time 256005ms

I can see that the DNS name resolves to the server’s IP address, but no replies come back, and when I hit Ctrl-C it exits with the output above.

When I open up the VNCConnect client on my Mac and enter the DNS name into the connect dialog, it tries for a while and then comes back with a message telling me that it timed out. I get the same result if I enter the IP address of the server.

Obviously, I am missing some steps. Any ideas?

Install Magento 2 API Client

I want to install this client, but I do not understand how to obtain the GIT_USER_ID and GIT_REPO_ID values.

https://github.com/netz98/magento2-swagger-api-client-demo

{
  "repositories": [
    {
      "type": "git",
      "url": "https://github.com/GIT_USER_ID/GIT_REPO_ID.git"
    }
  ],
  "require": {
    "GIT_USER_ID/GIT_REPO_ID": "*@dev"
  }
}