SP 2010 – User Email Alerts without SP Access

Hoping for some insight: is it possible to set users up to receive email alerts from a list while preventing them from seeing anything on the SharePoint site itself?

I tried the route of not giving the users site permission and setting them up with email alerts, but they don’t receive anything after the initial “set up” email (I’ve re-created the alerts and asked them to check their spam/junk folders). My current theory is that they need some sort of permission to actually receive the emails, but feel free to correct me if that’s wrong.

Essentially, without getting into too much detail: I’m setting up an attendance log, and the goal is for specific users from specific teams to get an email alert when someone on their team calls out and is added to the list. However, we don’t want them to have full access to the list of everyone who has called out for the day.

Unfortunately, I’m not involved with our company IT at all, so I’m out of my element here. But if I at least know this is possible, and how, I can kindly pass that along to our SharePoint IT group for their assistance/rescue, hah.

Designing a Workflow to Send Alerts – Help Please

I have a list app with the columns Title, Issue, Staff, and Action. It needs to send an email to the manager when a new title and issue are added. She then modifies the item to assign it to Staff to report on the Action, and an email should be sent to that staff member. The staff member then modifies the item to record the action, and an email is sent back to the manager.

I have done this:

Stage: Starting process
    Email Manager
    Set Workflow Status to Started
    If modified by Manager
        Email Staff
        Set Workflow Status to "Email to report on action taken sent to Staff"
    If modified by Current Item:Staff
        Email Manager
        Set Workflow Status to "Action reported back by ward manager"

Transition to stage
    Go to End of Workflow

And it’s not working.

Any advice, please.

How to fix alerts in Server Manager for the Tile Data model server service?

I manage a small collection of Windows Server 2016 Standard instances, virtualized on VMware and joined to an Active Directory domain. In Server Manager, I commonly see alerts on these servers for the “tiledatamodelsvc” having a status of Stopped.

My understanding is that this service backs Windows Live Tiles that can appear in the start menu. If I RDP into one of these servers, the service seems to wake up and the alert goes away. I don’t know exactly what causes this alert to come back, but my guess is it would be after a long period with no interactive logons (none of these servers are used for RDS). Or, speculating, perhaps if I never RDP’d into a server at all after reboot there would be a different behavior?

Do I need to investigate further as to why this service gets stopped, or is this normal? And if so, what is the best practice for preventing this kind of alert? I believe I could set a GPO to disable this service, or I could use the GPO administrative template for Start Menu and Taskbar to “Turn off tile notifications”, but as this is a User Configuration setting instead of a Computer Configuration one, I’m not sure that it would be applied.

Designing policy-based alerts using AWS and microservices

I am working on an application that involves alerts as well as alert/notification policies. A bit about my overall system design: I am using multiple small applications that each serve one job (microservices). So I have one service that does the alerting and another that stores and handles the policies. I am also using AWS services like SNS, SQS, and SES.

Examples of all important JSON are below.

So, about how it all works: my alerting service gets a message from an SQS queue that is produced by another service I have that does business logic. My alert service then asks the policy service (indirectly) for the policy attached to the message coming from the alerts queue. This policy contains the number of people attached to it as well as the user IDs of the users who are part of the policy and who should be alerted.
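The flow described above could be sketched roughly as follows. This is a hypothetical illustration, not the actual services: `fetch_policy` stands in for the indirect policy-service call, and the message/policy shapes follow the JSON examples further down.

```python
import json

# Hypothetical sketch: take a raw SQS message body, parse the alert, and
# look up its attached policy via a placeholder fetch_policy callable
# (standing in for the indirect call to the policy service).

def handle_alert_message(body, fetch_policy):
    alert = json.loads(body)
    policy = fetch_policy(alert["policy"])
    # the policy lists who should be alerted, in escalation order
    user_ids = [policy[f"person_{n}"]
                for n in ("one", "two", "three")[:policy["number_people"]]]
    return alert, user_ids
```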

Parts I need to figure out:

  1. I need to look up all the notification settings for each user. I can’t figure out how best to loop through and send only the API calls I need for each alert that needs to be sent.
  2. I need to figure out how to take the notification settings and alert the correct user via the proper notification method. This means I want to go through each user and notify them using their notification preferences, but I also want to escalate to the next user in line once escalate_time passes for each user.
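One way to think about point 2 is a sketch like the following. It is only an outline under assumptions: `send_notification` and `ack_received` are hypothetical placeholders for your own SNS/SES calls and acknowledgement tracking, and the policy/user shapes follow the examples below.

```python
import time

# Hypothetical sketch: notify each user in the policy in order, and move
# to the next user once escalate_time minutes pass without an
# acknowledgement. send_notification() and ack_received() are placeholders.

def escalate(policy, users_by_id, send_notification, ack_received,
             sleep=time.sleep):
    order = ("one", "two", "three")[:policy["number_people"]]
    for key in order:
        user_id = policy[f"person_{key}"]
        prefs = users_by_id[user_id]["notification"]
        send_notification(user_id, prefs["first_contact_method"])
        # wait escalate_time minutes, checking for an acknowledgement
        for _ in range(policy["escalate_time"]):
            if ack_received(user_id):
                return user_id      # alert handled, stop escalating
            sleep(60)               # wait one minute between checks
    return None                     # nobody acknowledged
```

In a real AWS build you would likely replace the in-process sleeps with something durable, e.g. SQS delayed messages or Step Functions wait states, so an instance restart doesn’t lose the escalation timer.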

I am using Node.js and am open to adding anything to the project to make it work how I want it to. I have full control of all the services mentioned since this is a new build project that I am doing solo. I should also add that I am open to most AWS services to help solve the problem.

I already know how I want to format the message, whether that be via email/SMS or a call. I have that part handled already.

With all that information, what would you suggest I do to get the end result that I want?

Message gotten by alert service from alerts queue

{     "name":"Example website",     "policy":"5c8c39de9a77b60a09cadd5c",     "url":"example.com",     "code":500,     "errorMessage":{         "code":"ETIMEDOUT","connect":true     } } 

Example policy

{
    id: objectID,
    companyID: '1',
    name: 'Policy name',
    number_people: 3,
    person_one: 'person_oneID',
    person_two: 'person_twoID',
    person_three: 'person_threeID',
    escalate_time: 5, // Can be up to 30
    createdAt: Date.now
}

Notification part of user model

Note: The value for each method can be email, sms, or call; their order doesn’t matter, and any number of methods past the first may be included.

Note 2: The contact times can be any value of 10 or under.

notification: {
    first_contact_method: email,
    second_contact_method: sms,
    third_contact_method: call,
    first_contact_time: 1,
    second_contact_time: 3,
    third_contact_time: 10
}
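Given that model, selecting the right contact method for a user can reduce to comparing elapsed time against the contact-time thresholds. A minimal sketch, assuming the notification dict above (the helper name is mine, not part of the system):

```python
# Sketch: given minutes elapsed since the alert fired, pick the contact
# method whose contact_time threshold has most recently been reached.
# Methods past the first are optional, so missing keys are skipped.

ORDINALS = ("first", "second", "third")

def method_for_elapsed(notification, minutes_elapsed):
    chosen = None
    for ordinal in ORDINALS:
        method_key = f"{ordinal}_contact_method"
        time_key = f"{ordinal}_contact_time"
        if method_key not in notification:
            break  # no further methods configured for this user
        if minutes_elapsed >= notification[time_key]:
            chosen = notification[method_key]
    return chosen
```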

What are some good patterns for cleaning up noisy logging alerts

In addition to traditional logging from applications going into e.g. Elasticsearch, an organisation may have an alerting system “Sentry” that receives log messages/exception events sent by applications over HTTP, and notifies developers of potential problems.

Suppose that Sentry now contains not only “actionable” events (e.g. error connecting to the database. Devops should investigate), but has been polluted with a lot of “non-actionable” events (e.g. user input could not be processed – expecting the user to try again, nothing for devops to do).

What are some options for going from a system full of mixed good and bad event data, to a clean system with only good data so that the alerts become meaningful again and don’t get ignored?

Examples:

  1. Gradually work through each event, starting with the low-hanging fruit/most common events, deciding whether or not it’s actionable.
  2. Create a new system and gradually transfer actionable events to it.
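For option 1, the "most common first" ordering can be automated if events carry some grouping key. A small sketch, assuming a hypothetical event shape with a `fingerprint` field (Sentry-style grouping):

```python
from collections import Counter

# Sketch: group recent events by a fingerprint and surface the most
# frequent unclassified ones first, so the team triages the biggest
# sources of noise before the long tail.

def triage_order(events, already_classified=()):
    done = set(already_classified)
    counts = Counter(e["fingerprint"] for e in events
                     if e["fingerprint"] not in done)
    return [fp for fp, _ in counts.most_common()]
```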

Multiple Google Calendar alerts from my personal email?

I have been receiving multiple emails for google calendar events sent to my gmail. This is not the same as the normal google calendar alerts. The normal alerts look like this:

[screenshot: a normal Google Calendar alert]

The new ones look like this:

[screenshot: one of the new alert emails]

Can someone please help me get rid of the latter? I cannot unsubscribe. I’d like to filter them, but using “alert” and my email as a filter could have adverse effects (I’ve done a filtered search and seen emails I don’t want to auto-archive). I believe this somehow may have to do with my iCalendar (I used to get very similar emails from my iCloud account, which I filtered out).

Snort rule for syn flood attacks – Limiting number of alerts

So I have a snort rule that detects syn flood attacks that looks like this:

alert tcp any any -> $HOME_NET 80 (msg:"SYN Flood - SSH"; flags:S; flow:stateless; detection_filter: track by_dst, count 40, seconds 10; gid:1; sid:10000002; rev:001; classtype:attempted-dos;)

The problem is, when I trigger it using tcpreplay (with a DDoS.pcapng file):

sudo tcpreplay -i interface /home/Practicak/DDoS.pcapng

While listening on VM1 with the Snort rule active, after running tcpreplay on VM2, I get a lot of alerts on VM1 – e.g. hundreds of "SYN Flood Detected" alerts.

How can I limit this so that I only get one or a few alerts for each SYN flood that is initiated (i.e. when using tcpreplay with the pcap file)? And is it good practice to display fewer alerts?
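One common approach in Snort 2.x is an `event_filter` of type `limit`, configured in snort.conf, which caps how many times a given rule may alert per tracking window. A hedged example matching the sid above (the count/seconds values here are illustrative, not recommendations):

```
# In snort.conf: at most 1 alert per destination per 60 seconds for
# the rule with gid 1, sid 10000002.
event_filter gen_id 1, sig_id 10000002, type limit, track by_dst, count 1, seconds 60
```

The rule’s own `detection_filter` still controls when the rule fires (40 SYNs in 10 seconds); the `event_filter` only throttles how often the resulting alerts are emitted.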

Thanks

Python script that alerts me when specific file gets created

I wrote a small script that opens a window using PyQt when a specific file gets created.

This is to let me know when another program (that I have no control over) has finished calculating something. I know at the end of the calculation a specific file is created.

Here is the code:

from PyQt5.QtWidgets import QWidget, QLabel
import os.path

file_to_check = "D:/test.txt"

alert_widget = QWidget()
alert_widget.setWindowTitle("Program X finished")
done_label = QLabel("Done", alert_widget)
done_label.move(50, 50)

file_does_not_exist = True

while file_does_not_exist:
    if os.path.isfile(file_to_check):
        file_does_not_exist = False

alert_widget.show()

The widget that gets shown luckily just pops up in front of whatever window I’m currently active in, whether I’m browsing the internet or working on an Excel sheet. That is intended behaviour, but I did not write it in by specifying it as a “top level” window or something.

The script is started in an IPython console in Spyder and just left to run there. It works exactly as expected.

Now my questions:

  • is this constantly running loop good practice?
  • are there any downsides to having that loop constantly running in the background?
  • could it block other programs?
  • is there another, more elegant way to achieve what I’m looking to do?
  • is the code following python coding conventions?
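On the loop questions specifically: a bare `while` loop with no pause will spin one CPU core at 100% until the file appears. A common alternative is to sleep between checks. A minimal stdlib sketch of that idea (the function name and timeout behaviour are my own, not from the original script):

```python
import os.path
import time

# Sketch: poll for a file with a sleep between checks instead of a busy
# loop, so the waiting script uses essentially no CPU. Returns True once
# the file exists, or False if `timeout` seconds pass first.

def wait_for_file(path, poll_interval=1.0, timeout=None):
    start = time.monotonic()
    while not os.path.isfile(path):
        if timeout is not None and time.monotonic() - start >= timeout:
            return False
        time.sleep(poll_interval)
    return True
```

Within a Qt application you could achieve the same without blocking at all, e.g. by polling from a `QTimer` or using `QFileSystemWatcher` on the directory, so the GUI event loop keeps running while you wait.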