Where can I find the official hash file for my Apache Netbeans download?

So, as prompted by the Apache download site, I have generated a hash for my new NetBeans download. It STRONGLY encourages me to compare my generated hash with their official hash file.

To check a hash, you have to compute the checksum of the file you just downloaded, then compare it with the published checksum of the original.
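In my case that meant something like the following from a Linux shell (the file names below are only an example of the NetBeans naming scheme, not my exact download):

# Compute the SHA-512 of the downloaded archive.
sha512sum Apache-NetBeans-12.0-bin.zip

# If a matching .sha512 file were published alongside the download, and it
# used the usual "HASH  FILENAME" layout, the comparison would just be:
sha512sum -c Apache-NetBeans-12.0-bin.zip.sha512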

Well, I generated my own, but it doesn’t appear that they actually publish an official checksum for NetBeans. Should I be worried?

How can I maintain and automate a list of download URLs with known hashes?

I’m familiar with the concept of downloading a file and manually checking it against a published checksum. See How to verify the checksum of a downloaded file (pgp, sha, etc.)? and Is there a command line method by which I can check whether a downloaded file is complete or broken? for examples.

I would now like to maintain my own list of target URLs and expected checksums, something like:

https://example.com/file.tar.gz, SHA123456...
https://example.org/list.txt, SHA1A1A1A...

…and automate the download-and-check process.
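For concreteness, this is roughly what I would otherwise hack together, a bash sketch assuming a manifest.csv of URL,SHA256 pairs (all file and manifest names here are my own invention):

#!/bin/bash
# Sketch: download each URL listed in manifest.csv and verify its SHA-256.
# manifest.csv lines look like: https://example.com/file.tar.gz,123456...
set -euo pipefail
while IFS=, read -r url sum; do
    file=$(basename "$url")
    wget -q -O "$file" "$url"
    # sha256sum -c expects "HASH  FILENAME" (two spaces) on stdin
    echo "$sum  $file" | sha256sum -c -
done < manifest.csv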

Before I make the mistake of hacking my own solution, is there an established way to do this on a Debian-based distro?

Yahoo Groups is going away: can I use Mathematica to download the old messages?

It looks like all the old email messages from my group are stored in web pages with URLs that look like this:

page="https://groups.yahoo.com/neo/groups/arco-75/conversation/messages/19291"] 

That’s email number 19291, and the others have the same form but different numbers at the end. What I am hoping to do is to grab all the old messages. The problem is that:

ans = Import[page] 

returns content that begins with “Sorry, an error occurred while loading the content.” From the look of it, I’m guessing the problem is that my Import statement is not “logged in” to the website, and so the request is being rejected. Does anyone know how to “log in” to the Yahoo site (to enable downloading of the old emails)?
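For what it’s worth, the generic workaround I know of outside Mathematica is to export the session cookies from a logged-in browser and replay them with each request; a shell sketch of that idea follows (the cookies.txt export and the ID range are assumptions, and I haven’t tested it against Yahoo):

# Sketch with curl rather than Import: replay browser session cookies
# (exported in Netscape format to cookies.txt) for each message page.
for id in $(seq 1 19291); do
    curl -s -b cookies.txt \
        "https://groups.yahoo.com/neo/groups/arco-75/conversation/messages/$id" \
        -o "message-$id.html"
done

If Mathematica can be made to send the same cookies, that would be ideal.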

Bulk download files from AWS server

I would like to download all the files for a MOOC course. I believe the directory the files are in is private, while the files themselves are publicly accessible.

The server is an AWS (S3) server, so I understand there is a CLI for it.

First, what would the approach be on Linux/Ubuntu for downloading the files with the AWS CLI?

And secondly, can this be done with just a bash script?
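From what I can tell, the AWS CLI can be pointed at a bucket without credentials if the bucket allows anonymous access; is something like this (untested, bucket name taken from the URL below) the right idea?

# Sketch: list and mirror the prefix anonymously.
# --no-sign-request skips credentials; it only works if the bucket allows it.
aws s3 ls s3://edx-course-spdx-kiczales/HTC/ --no-sign-request
aws s3 sync s3://edx-course-spdx-kiczales/HTC/ ./HTC --no-sign-request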

Is there a way to list all the files in a directory if you only have access to the individual files?

I don’t need the full script, just a workaround to get a list of all the files in the directory https://s3.amazonaws.com/edx-course-spdx-kiczales/HTC/

The list should look something like this:

more-arithmetic-expression-starter.rkt
more-arithmetic-expression-solution.rkt
tile-starter.rkt
tile-solution.rkt
compare-images-starter.rkt
compare-images-solution.rkt
more-foo-evaluation-starter.rkt
...

Anyway, I already went through manually and found the name of each file in the directory, but since I might complete more courses, and they will more often than not host the material files in the same manner, I would love it if there were a bulk approach to downloading files from an (AWS) server.
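To make the bash half of the question concrete, here is the kind of sketch I have in mind: first try S3’s public listing API, and fall back to the names I collected by hand (GNU grep and anonymous access are assumptions):

#!/bin/bash
# Sketch: ask S3's ListObjectsV2 REST endpoint for the keys under HTC/;
# this only returns XML if the bucket permits anonymous listing.
curl -s "https://s3.amazonaws.com/edx-course-spdx-kiczales?list-type=2&prefix=HTC/" |
    grep -oP '(?<=<Key>)[^<]+'

# Fallback: loop over file names gathered by hand (abridged here).
base="https://s3.amazonaws.com/edx-course-spdx-kiczales/HTC"
for f in more-arithmetic-expression-starter.rkt tile-starter.rkt; do
    wget -q "$base/$f"
done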

Thanks for your help.

Cannot download Office files in SharePoint browser

I have a user whose permission level for a certain subsite is customized.

I’m currently investigating user permissions, and I found that the minimum permission needed to download a file in the SharePoint browser is View Items.

However, downloading Office files (docx, xlsx, pptx) gives the user an Unauthorized Access error, so I investigated again and found that the Open Items permission is needed together with View Items in order to download any kind of file (including Office files).

My question is: what is it about the Open Items permission that prevents the Unauthorized Access error when downloading Office files?

Or, conversely, what is it about Office files that causes the error when the permission is only View Items?