Why do THC Hydra and Medusa give false positives when used on a TP-Link Netcam?

I am a university student who is doing a final year project on IoT device security within an isolated network.

One of the tests I carried out was brute forcing. I already knew the username and password for a factory-reset IP Netcam, but I wanted to see how the attack would work in practice and whether it even worked on IoT devices.

The commands I used for the two tools are as follows:

    medusa -h <IP address> -u <default login> -P Desktop/rockyou.txt -n 80 -M http

    hydra -l <default login> -P Desktop/rockyou.txt -e ns -f -V <IP address> http-get

Hydra seemed to work fine on other devices and would run through the entire list. Against this TP-Link Netcam, however, both tools only got partway through the list and sometimes reported multiple false positives within the few attempts they made.

While I no longer have access to these devices to continue testing, I would at least like to know: was it something I entered incorrectly, or does the device have something that could stop this kind of attack?

Any insight would be greatly appreciated, thank you for your time.

Reduce form spam without external dependencies or false positives

I’m working on a Java Spring application with a team and have been facing form spam issues. We are seeing a large number of requests that use generated and falsified information (e.g. everyone’s names are generic, emails follow the same syntax, birthdays are January 1st of various years, and all forms are completed in the same amount of time). Given the uniform nature of these requests, I can reliably assume automation is being used.

Without introducing additional external dependencies into the project (e.g. CAPTCHA), I would like to set up a system that makes automated form submissions as difficult as possible. My current plan combines five techniques in tandem:

  • Measuring keystroke events per form/field
  • Hidden honeypot fields
  • Measuring mouse events per page
  • Server-side timing
  • IP blocking

The first three are obviously front-end and focus on determining how “human” the user is behaving. The latter two determine whether the user is coming from a flagged IP, or is submitting data repeatedly and within the same amount of time as their last submission. The nature of the application allows and encourages the same user to enter and submit data more than once, but the vast majority of users don’t submit more than a few times.
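For the server-side pieces, here is a minimal sketch of how the honeypot, timing, and repeat-submission checks could look in a single Spring controller. Everything specific in it is an assumption on my part: the field names (website, formToken), the endpoint paths, and all the thresholds are invented for illustration, and the in-memory IP map only works on a single node.

    // Sketch only. jakarta.servlet is Spring Boot 3 / Spring 6;
    // use javax.servlet on older stacks.
    import java.nio.charset.StandardCharsets;
    import java.time.Instant;
    import java.util.Base64;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import jakarta.servlet.http.HttpServletRequest;
    import org.springframework.http.HttpStatus;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestParam;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class FormSpamController {

        // Server-only secret; load from configuration in a real deployment.
        private static final byte[] SECRET =
                "change-me-long-random-secret".getBytes(StandardCharsets.UTF_8);
        private static final long MIN_FILL_MILLIS = 3_000;          // assumed: humans rarely finish faster
        private static final long MAX_TOKEN_AGE_MILLIS = 3_600_000; // assumed: reject hour-old tokens
        private static final long RESUBMIT_WINDOW_MILLIS = 5_000;   // assumed: faster repeats look scripted

        // Last submission time per IP (in-memory, single-node only).
        private final ConcurrentHashMap<String, Long> lastSeen = new ConcurrentHashMap<>();

        /** Called when the form is rendered; embed the result in a hidden input. */
        @GetMapping("/form-token")
        public String issueToken() {
            long now = Instant.now().toEpochMilli();
            return now + "." + hmac(Long.toString(now));
        }

        @PostMapping("/submit")
        public ResponseEntity<String> submit(
                @RequestParam(name = "website", required = false) String honeypot, // hidden via CSS
                @RequestParam(name = "formToken") String formToken,
                HttpServletRequest request) {

            int flags = 0;

            // Honeypot: bots that fill every field populate this; humans never see it.
            if (honeypot != null && !honeypot.isBlank()) {
                flags++;
            }

            // Server-side timing: the token carries the render timestamp,
            // and the HMAC stops clients from forging an older one.
            long now = Instant.now().toEpochMilli();
            String[] parts = formToken.split("\\.", 2);
            if (parts.length != 2 || !hmac(parts[0]).equals(parts[1])) {
                flags++; // forged or mangled token
            } else {
                long elapsed = now - Long.parseLong(parts[0]);
                if (elapsed < MIN_FILL_MILLIS || elapsed > MAX_TOKEN_AGE_MILLIS) {
                    flags++; // impossibly fast, or a stale/replayed token
                }
            }

            // Repeat-submission interval per IP.
            Long previous = lastSeen.put(request.getRemoteAddr(), now);
            if (previous != null && now - previous < RESUBMIT_WINDOW_MILLIS) {
                flags++;
            }

            // Combine with the front-end signals before making the final call.
            if (flags >= 2) {
                return ResponseEntity.status(HttpStatus.UNPROCESSABLE_ENTITY).body("rejected");
            }
            return ResponseEntity.ok("accepted");
        }

        private static String hmac(String payload) {
            try {
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
                return Base64.getUrlEncoder().withoutPadding()
                        .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
            } catch (Exception e) {
                throw new IllegalStateException("HMAC unavailable", e);
            }
        }
    }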

My number one priority is reducing spam, with a close second being minimizing false positives. None of the five signals would be an absolute ruling on its own, but if two or more of these flags are triggered, the user is flagged and booted.
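The “two or more flags” rule itself could be kept as a tiny, testable unit; the SpamSignal names and the threshold constant below are just my own labels for the five items above:

    import java.util.EnumSet;
    import java.util.Set;

    // One constant per signal from the list above; names are invented.
    enum SpamSignal {
        KEYSTROKE_ANOMALY,  // front-end: too few or zero keystroke events
        HONEYPOT_FILLED,
        MOUSE_ANOMALY,      // front-end: no mouse movement before submit
        TOO_FAST,           // server-side timing check failed
        FLAGGED_IP
    }

    final class SpamVerdict {
        private static final int THRESHOLD = 2; // the "two or more flags" rule

        static boolean shouldBoot(Set<SpamSignal> triggered) {
            return triggered.size() >= THRESHOLD;
        }

        public static void main(String[] args) {
            Set<SpamSignal> triggered =
                    EnumSet.of(SpamSignal.HONEYPOT_FILLED, SpamSignal.TOO_FAST);
            System.out.println(SpamVerdict.shouldBoot(triggered)); // true: reject
        }
    }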

With all that said, how can I improve or build upon this approach? Are there any particular Spring libraries I could use? I am primarily a JavaScript dev working in a Java project, hence my focus on front-end behavioral analysis.