Facebook Marketplace: competitors keep reporting me

OK, let's start. My name is Lana. :)

I have been selling products on Facebook Marketplace for 8 months, and it has been going great.
Last week I noticed that my listings were not visible, but everything else looked normal.
After 3 days Facebook unblocked me and my listings were public for 1 hour.

Then the same thing happened again, but this time I was blocked for 1 day.

I noticed that I have competitors (selling the same products), and they have more than 20 fake accounts with listings.
Then I got a report from Facebook that someone…

Gedit in Ubuntu in a VM reporting file changed on disk

Note: I have heard that asking about old versions is off topic here, but I'm using Ubuntu 14.04 because it is required by my build environment.

My question is very similar to this one: https://superuser.com/questions/298577/gedit-in-vmware-always-says-file-has-been-modified-since-opening-when-i-save. Maybe the two have the same cause, but the symptoms are not exactly the same.

Note: please don't suggest using a newer version of Ubuntu; some of the software I use needs 14.04.

I'm using Ubuntu 14.04.4 in VMware Workstation 12.1.1 under Windows 7 x64, and now in VMware Workstation 15.1.0 under Windows 10 x64, which makes no difference. I mount a shared folder at /mnt/hgfs/share to share files with the Windows host easily, and I create and edit a text file such as /mnt/hgfs/share/docs/1.txt using gedit 3.10.4. The shared folder is created with VMware Workstation's GUI under Windows.

Then, every so often, it pops up the message “The file “/mnt/hgfs/share/docs/1.txt” changed on disk”. I'm not sure under what conditions the message shows up, but I'm sure that sometimes I'm not editing the file outside gedit, and sometimes it appears many times even when I'm not editing the file in gedit at all. The date/time method from the question linked above doesn't work.

The more serious problem is that, often, when I save the file it says the file has been modified outside gedit and prompts me to drop my edits.
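
Since the date/time comparison didn't help, here is a minimal Python sketch for narrowing the problem down. It assumes the same /mnt/hgfs/share/docs/1.txt path from the question; as far as I know, gedit flags “changed on disk” when the file's modification time no longer matches what it recorded, so if the hgfs mount reports an mtime that drifts or jumps relative to the guest clock, that would explain the spurious warnings.

    #!/usr/bin/env python3
    # Print a shared file's timestamps next to the guest clock, to spot
    # timestamp skew or jumps on the VMware hgfs mount.
    import os
    import time

    PATH = "/mnt/hgfs/share/docs/1.txt"  # the file from the question

    st = os.stat(PATH)
    now = time.time()
    fmt = "%Y-%m-%d %H:%M:%S"

    print("guest clock :", time.strftime(fmt, time.localtime(now)))
    print("file mtime  :", time.strftime(fmt, time.localtime(st.st_mtime)))
    print("file ctime  :", time.strftime(fmt, time.localtime(st.st_ctime)))
    print("mtime - now :", round(st.st_mtime - now, 3), "seconds")

Running it a few times while the file sits untouched should show whether the mtime is stable; an mtime in the future, or one that changes on its own, would point at the shared-folder timestamps or guest/host clock sync rather than at gedit itself.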

SQL Server Reporting Services Report Viewer – Error Message

We are trying to add the SQL Server Reporting Services Report Viewer web part to a page, but get the following error:

“An error occurred during local report processing. For more information about this error navigate to the report server on the local server machine, or enable remote errors”

We are running an old version of SharePoint; I believe it's SharePoint 2010.

Reporting Services

I already know how to generate reports from queries or stored procedures in SQL Server, and likewise how to create the fields and variables, all within a working report.

Does anyone know a way to make Reporting Services generate the report with the fields automatically, just by telling it the table?

That is, if I have a productos table, Reporting Services should build a report with all of that table's fields; if I have another proveedores table, it should do the same, and so on.

GA reporting shows sudden drop in direct traffic

For a few months, our direct traffic had been getting higher and higher, which is odd because nothing had changed: no new services, etc. Recently we switched from the embedded GA code to loading our Analytics script via Google Tag Manager. Everything seemed to be OK, except that direct traffic dropped overnight from about 2,500 sessions to about 150 sessions. The GTM implementation looks fine: Google Tag Assistant confirms it is installed and running, and in the real-time reports I can see my page views. The drop also happened at the time the GTM code was installed on the site.

Has anyone experienced an issue like this before?

Why would you populate a temp table for reporting?

I'm maintaining a legacy accounting application that runs nearly all of its reports by populating a temp table at runtime. The temp table is dropped on a daily basis and includes information about the report's configuration (e.g. which terminal ran it, landscape vs. portrait, etc.). Is there a reason to have the application do this rather than read the configuration in real time and pull the data it needs from a custom view?
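
For reference, here is a minimal sketch of the pattern as I understand it, using Python's built-in sqlite3 as a stand-in for the real database and entirely made-up table and column names: the report's configuration is written into a temp table at run time, and the report query joins against that snapshot.

    import sqlite3

    con = sqlite3.connect(":memory:")  # stand-in for the accounting database
    con.execute("CREATE TABLE sales (terminal TEXT, amount REAL)")
    con.executemany("INSERT INTO sales VALUES (?, ?)",
                    [("T1", 10.00), ("T2", 25.50), ("T1", 4.25)])

    # The legacy pattern: report configuration is materialised into a temp
    # table when the report is run...
    con.execute("CREATE TEMP TABLE report_config (terminal TEXT, orientation TEXT)")
    con.execute("INSERT INTO report_config VALUES ('T1', 'landscape')")

    # ...and the report query joins against that snapshot.
    rows = con.execute("""
        SELECT s.terminal, SUM(s.amount) AS total, c.orientation
        FROM sales s
        JOIN report_config c ON c.terminal = s.terminal
        GROUP BY s.terminal, c.orientation
    """).fetchall()
    print(rows)  # [('T1', 14.25, 'landscape')]

The alternative described in the question would read the configuration and data directly from views at query time instead.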

Is there a document or other material to help me understand reporting hierarchies within an organization?

I am designing a platform for the CEO to compare the reporting hierarchy from last year to today, so he can see the employee changes in the company.

Here is the use case:

MJ is the CEO at ACME. ACME recently went through massive changes in its internal organizational structure. Some employees were promoted, some left, some were hired, some moved departments, while others remained in the same position. MJ wants to compare the reporting hierarchy from last year to today so he can see the employee changes in the company.

Acceptance criteria:
• A side-by-side comparison view of the two hierarchies, to compare them easily.
• A way for the user to quickly see the changes between the two hierarchies: promotions, resignations, new hires.
• A way for the user to search for an employee and view their location within both hierarchies.

Non-functional requirements:
• ACME currently has 2,000+ employees.
• ACME's reporting structure is at most 5 levels deep.
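
For what it's worth, here is a rough Python sketch of the comparison itself, assuming each year's hierarchy is available as a simple employee-to-manager mapping (all names and data below are made up). It classifies new hires, departures, and people whose reporting line changed, which a side-by-side view could then highlight.

    # Each snapshot: employee id -> manager id (None for the CEO).
    last_year = {"mj": None, "ana": "mj", "bob": "ana", "cara": "ana"}
    today     = {"mj": None, "ana": "mj", "cara": "mj", "dev": "ana"}

    new_hires  = sorted(today.keys() - last_year.keys())
    departures = sorted(last_year.keys() - today.keys())
    moved      = sorted(e for e in today.keys() & last_year.keys()
                        if today[e] != last_year[e])

    print("new hires :", new_hires)   # ['dev']
    print("departures:", departures)  # ['bob']
    print("moved     :", moved)       # ['cara'] (now reports to mj instead of ana)

Promotions would need a title or grade field in each snapshot, but the same set arithmetic applies. At 2,000+ employees and at most 5 levels, the comparison itself is trivial to compute, so most of the work is in the side-by-side visualisation rather than the diff.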

Please let me know if there are any similar concepts available in the industry, or share your thoughts.

Nmap discovery scan reports host offline, but pinging the same host gets ICMP responses

I ran an nmap -sn scan on a host, and nmap reported the host as down. I then pinged the same host with ping and got ICMP responses. I'm confused, because I was sure that -sn, among other things, sends an ICMP echo request.

Output from my two commands:

    ~ $ nmap -sn 192.168.1.237
    Starting Nmap 6.40 ( http://nmap.org ) at 2016-08-16 09:35 BST
    Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
    Nmap done: 1 IP address (0 hosts up) scanned in 3.00 seconds

    ~ $ ping 192.168.1.237
    PING 192.168.1.237 (192.168.1.237) 56(84) bytes of data.
    64 bytes from 192.168.1.237: icmp_seq=1 ttl=128 time=9.82 ms
    64 bytes from 192.168.1.237: icmp_seq=2 ttl=128 time=5.25 ms
    64 bytes from 192.168.1.237: icmp_seq=3 ttl=128 time=2.95 ms
    64 bytes from 192.168.1.237: icmp_seq=4 ttl=128 time=9.10 ms
    ^C
    --- 192.168.1.237 ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3004ms
    rtt min/avg/max/mdev = 2.957/6.785/9.826/2.810 ms

Any ideas why nmap could be confused? I'm running the scan from my Ubuntu 16.04 box; the target is a Windows 10 machine.
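
In case it helps with reproducing this, here is a small Python sketch (standard library only, assuming nmap is on the PATH as in the question) that runs the same target through a few host-discovery options, so the unprivileged default probes can be compared against an explicit ICMP echo ping (-PE, which needs root) and no discovery at all (-Pn):

    import subprocess

    TARGET = "192.168.1.237"  # the host from the question; adjust as needed

    # Host-discovery strategies to compare. -PE sends a raw ICMP echo, so run
    # this script with sudo; -Pn skips discovery and assumes the host is up.
    probes = {
        "default -sn ping scan": ["nmap", "-sn", TARGET],
        "ICMP echo only (-PE)":  ["nmap", "-sn", "-PE", TARGET],
        "no discovery (-Pn)":    ["nmap", "-sn", "-Pn", TARGET],
    }

    for label, cmd in probes.items():
        print("==", label, "==")
        result = subprocess.run(cmd, capture_output=True, text=True)
        print(result.stdout or result.stderr)

If the plain -sn run still reports the host as down while the -PE run (as root) finds it, that would suggest the probes an unprivileged scan falls back to are being dropped by the Windows firewall even though plain ICMP echo gets through.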