Introducing GrindLists.com – SuperCharge Your GSA SER With Fresh Lists Daily

Dear linkbuilders,

We’ve all been there. You buy a link list. You download it. You’re excited. You start hitting it hard. Things are good. Verified links piling up.

At first anyway…

Then it slows down. Pretty soon it’s crawling. Now you’re right-clicking to view the target URLs remaining because you think it must have run out of targets… and there are 122k left.

Why? Because these link lists that promise only “50 COPIES TOTAL” are selling your exact list to 49 other people. And you’re ALL hammering that same exact list against multiple projects.

Even if that list really had 200k freshly verified link drops (lettuce be cereal, it didn’t… when was the last time you got close to the advertised numbers? Yeah, me too…), by the time you and the crew get done pummelling it, half the sites are down from excessive bandwidth or suffering what is basically a low-level DDoS from the hum of a hundred GSA installs plowing into it with 300 threads each.

And then, when the webmasters who had the misfortune of ending up on one of those lists get done cleaning up the damage and deleting all your links, well, hello negative link velocity. And the corresponding rank drops that go with it.

Don’t believe me? Go run a list you bought a month ago. I’ll bring the tissues and a cup of warm milk; you’re going to need something to make you feel better after seeing those numbers.

Alternatively…

You don’t buy lists. You know better. <Insert the 50-monkeys-fighting-over-the-same-football cliché here.> Not going to end well.

Instead, you set up servers. Multiple. You install Hrefer. Scrapebox. GScraper.

You order proxies. Hundreds of them. You optimize your scraping to match threads to proxies, so you don’t burn them out and have to get new ones after they’re blocked.

Now you have millions of urls.

Time to process them.

Copying. Pasting. Syncing to dropbox.

And then it’s the age old question: do I just hammer these lists with GSA trying to post to them, hoping I don’t burn down my servers and proxies with spam complaints?

Or do I take the time to pre-process them with GSA? Which takes literally FOR.EV.ER

Either way, days are going by while you wrassle data.

Other projects are getting neglected…

Sites are not being built.

PBNs are not being updated.

High-quality tier 1 links that actually rank sites post-Penguin getting created? Ain’t nobody got time for that…

Split testing your existing traffic? That’s a dream. An activity reserved for someone who actually has time.

All because running automation software like GSA is actually really time consuming.

Until now…

Introducing GrindLists.

GrindLists puts an end to buying lists and getting the same links as everyone else.

GrindLists puts an end to wasted hours spent scraping and filtering your own link lists.

GrindLists sticks a pipe out into the internet and diverts a highly filtered and qualified stream of links straight into your GSA installs.

GrindLists automates fresh link discovery and it does it almost entirely hands free. No two feeds are alike. No repetitive hammering on the same link sets as your competition.

And it takes less than two minutes to set it up. A setup you only have to do one time.

Watch the video on the home page again to see me set it up in less than two minutes. Literally.

Nothing to install. No giving up one of your file folders. You do what you want with your setup; we just give you an access point to tie into and use how you want.

Ever wonder what SEO would be like if you could have GSA work like it used to work, back in 2012? Back before all the easy to find targets got ruined?

You can find out. You can experience that. You can experience results like these:

Stats from a week running the verified & identified feeds on one of my user projects testing our public system (notice where I plugged in the Identified feed?):

[screenshot]


This site has been ranking in this range for months and months, since early summer. Good tier 1 links buffered by tiers of GrindLists.

[screenshot]

Need to build good links to your PBN but you don’t know how? PR3+ contextual, low-OBL links dripped in over 2 days running only the identified list.

[screenshot]

And you can do it all relatively hands off.

Because when you’re not babysitting GSA and importing lists all day, you can go out and do all the important tasks that you need to do to be a successful SEO in this era.

Build good sites.
Build good pumper nets.
Acquire good tier 1 links.
You should probably optimize your traffic too, little changes go a long way there.

But I digress….back to GrindLists.

If you’re playing the automated SEO game, you’re probably using GSA Search Engine Ranker.

If you’re using GSA and you’re not using GrindLists, you’re on the bottom looking up.

Because your competition is. I’ve been using it since summer. Others have had access for a month now.

When we sell out our capacity, we’ll be done. There’s only so much load any system can handle.

If you miss out now, you might never get a second chance.

None of my beta testers and early subscribers are giving up their slots. They’re actually begging me to stop selling it.

But I haven’t changed the game for a while. Now is the time.

It’s up to you to pick which side you want to be on. I’d suggest the winning team. If you choose wisely, I’ll see you on the other side.

– Grind

P.S. I’ll give out three 48-hour review copies. To qualify, you must: 1) set it up and run links within ONE hour, 2) post a screenshot of your results (verified/identified only is fine) within 12 hours, and 3) update the screenshot at the end of your 48-hour review.

FAQ

How many links do I get?

You get two feeds. One is verified links we’ve built within the last 24 hours. The other is identified links in the same time frame. Both feeds have a ton of good links.

The small plan has 2000 verified links pumped to your feed daily and 10,000 identified links pumped in every day.

The big plan has 5000 verified links pumped to your feed daily and 50,000 identified links pumped in every day.

You can use one feed. You can use the other feed. You can use both feeds (I do!).

Am I guaranteed 2000 or 5000 verified links every day I run it?

No. Everyone’s settings, servers, etc. are different.

The verified links were built with GSA on big servers running captcha breaker only, all platforms.

Your servers/setups are probably different. You will probably have different results.

We will guarantee this: your feed will get a minimum of 2000 or 5000 verified links (depending on your plan) uploaded to it every 24 hours.

They will not be filtered in any way. Filter the pipe to suit your needs after it hits your GSA.

One size fits all because you’re doing the final fitting to your specs.
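Whether you do that filtering inside GSA itself or with a quick script before you import is up to you. For the script route, here is a minimal sketch that assumes the feed lands as a plain text file with one URL per line (the file names and the one-link-per-domain rule are placeholders for whatever your own specs are):

```go
package main

import (
	"bufio"
	"fmt"
	"net/url"
	"os"
	"strings"
)

// Dedupe a raw feed file down to one URL per domain and drop anything
// that isn't a clean http(s) URL. "feed.txt" and "filtered.txt" are
// placeholder file names.
func main() {
	in, err := os.Open("feed.txt")
	if err != nil {
		panic(err)
	}
	defer in.Close()

	out, err := os.Create("filtered.txt")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	seen := make(map[string]bool)
	scanner := bufio.NewScanner(in)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		u, err := url.Parse(line)
		if err != nil || (u.Scheme != "http" && u.Scheme != "https") {
			continue // skip blanks and junk lines
		}
		host := strings.ToLower(u.Hostname())
		if seen[host] {
			continue // keep one target per domain
		}
		seen[host] = true
		fmt.Fprintln(out, line)
	}
}
```

Then import filtered.txt into your projects as target URLs instead of the raw feed.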

And run the Identified feed, please. We identify targets with custom Python bots that do a really good job. It’s got some trash, but there’s so much gold in there. And since the Verified lists are built using only SER with Captcha Breaker, you can get great links from the Identified feed with different captcha settings.

[screenshot]

I meant to run only 1 link per domain on there, but I wasn’t paying attention. Still… if it was set up right, 78 PR3+ contextual links on unique domains in 2 days? On autopilot? Yes please…

How does your contextual success rate compare with other lists?

Good question. I’ve had a couple guys reviewing it for me that are frequent buyers of all the list services.

They are blown away. We had more contextuals verified in 36 hours than they got off a popular list. And we fed them ~15k links, while they had to run ~200k on the comparison list.

Then I told them that I’m not running any captcha services and it’s all done with GSA Captcha Breaker software. 🙂

Imagine if you ran the identified list with a paid captcha service and a good reCAPTCHA OCR. 😉

How much does it cost?

$67/month and $147/month, respectively.

How do I set it up? Your video sucked.

Guide

How Do I Sign Up?

http://grindlists.com

More FAQ

How can wkhtmltopdf be used without introducing a security vulnerability?

Background

Per the project website, wkhtmltopdf is a "command line tool to render HTML into PDF using the Qt WebKit rendering engine. It runs entirely "headless" and does not require a display or display service."

The website also states that "Qt 4 (which wkhtmltopdf uses) hasn’t been supported since 2015, the WebKit in it hasn’t been updated since 2012."

And finally, it makes the recommendation "Do not use wkhtmltopdf with any untrusted HTML – be sure to sanitize any user-supplied HTML/JS, otherwise it can lead to complete takeover of the server it is running on!"


Context

My intention is to provide wkhtmltopdf as part of an application to be installed on a Windows computer. This may or may not be relevant to the question.


Qualifiers / Additional Information

  • A flag is provided by wkhtmltopdf to disable JavaScript (--disable-javascript). This question assumes that this flag functions correctly and thus will count all <script> tags as benign. They are of no concern.
  • This question is not related to the invocation of wkhtmltopdf: the source HTML will be provided via a file (not the CLI / STDIN), and the actual command used to run wkhtmltopdf has no chance of being vulnerable (see the sketch after this list for the rough shape of that invocation).
  • Specifically, this question relates to "untrusted HTML" and "sanitize any user-supplied HTML/JS".
  • Any malicious user that is able to send "untrusted" HTML to this application will not receive the resultant PDF back. That PDF will only temporarily exist for the purpose of printing and then be immediately deleted.
  • Even someone with 100% working knowledge of all of the wkhtmltopdf/webkit/qt source code cannot concretely state that they have zero vulnerabilities or how to safeguard against unknown vulnerabilities. This question is not seeking guarantees, just an understanding of the known approaches to compromising this or similar software.
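For reference, the invocation I have in mind is roughly the following sketch (Go is only for illustration; the file paths and the extra --disable-local-file-access flag are my own hardening assumptions, not things the question depends on):

```go
package main

import (
	"log"
	"os/exec"
)

// Render an already-sanitized HTML file to PDF with JavaScript and
// local file access disabled. Paths are placeholders.
func main() {
	cmd := exec.Command(
		"wkhtmltopdf",
		"--disable-javascript",        // the flag discussed above
		"--disable-local-file-access", // keep untrusted HTML away from file:// reads
		"input.html",
		"output.pdf",
	)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("wkhtmltopdf failed: %v\n%s", err, out)
	}
}
```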

Questions

What is the goal of sanitization in this context? Is the goal to guard against unexpected external resources? (e.g. <iframe>, <img>, <link> tags). Or are there entirely different classes of vulnerabilities that we can’t even safely enumerate? For instance, IE6 could be crashed with a simple line of HTML/CSS… could some buffer overflow exist that causes this old version of WebKit to be vulnerable to code injection?

What method of sanitizing should be employed? Should we whitelist HTML tags/attributes and CSS properties/values? Should we remove all references to external URI protocols (http, https, ftp, etc.)?
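To make the whitelist option concrete, here is a minimal sketch of what I mean (it uses the bluemonday sanitizer for Go; the library choice and the particular allowed tags are illustrative assumptions on my part, not part of the question):

```go
package main

import (
	"fmt"

	"github.com/microcosm-cc/bluemonday"
)

// Whitelist-style sanitization: only explicitly allowed elements survive,
// and since no attributes are allowed, event handlers, src/href URLs and
// style attributes are stripped before the HTML ever reaches wkhtmltopdf.
func sanitize(untrusted string) string {
	p := bluemonday.NewPolicy()
	p.AllowElements("p", "br", "b", "i", "em", "strong", "u",
		"h1", "h2", "h3", "ul", "ol", "li",
		"table", "thead", "tbody", "tr", "th", "td",
		"span", "div")
	return p.Sanitize(untrusted)
}

func main() {
	dirty := `<p onclick="pwn()">hello</p><iframe src="http://evil.example"></iframe>`
	fmt.Println(sanitize(dirty)) // the onclick attribute and the iframe are gone
}
```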

Does rendering of images in general provide an attack surface? Perhaps the document contains an inline/data-URI image whose contents are somehow malicious, but this cannot reasonably be detected by an application whose scope is simply to trade HTML for a rendered PDF. Do images need to be disabled entirely to safely use wkhtmltopdf?

Adobe Photoshop has killed my creativity by introducing so much anti-privacy, anti-security, and dumbed-down bloat [closed]

I could pick so many angles to this, but I will focus on a very famous piece of software which has been utterly ruined in recent years: Adobe Photoshop. This is not a rant, but related both to software design and psychology.

These days, it is not possible to install it on a PC without going through a program which they call “Adobe Creative Cloud”. It’s sort of a software “hub” from which you download/manage the actual software you want (Photoshop, Illustrator, etc.). It demands that you register an account and log in to use it; if you don’t, you simply cannot install or use Photoshop or any other Adobe software. (Other than some ancient, outdated copy you might still have as a physical box on the shelf.)

Just having that “Creative Cloud” stuff on my computer in the first place gives me the creeps. On top of requiring you to have an Adobe account and log in to it, it aggressively installs itself as a difficult-to-remove icon in File Explorer, and makes itself very “known” overall, with frequent “update notifications” which have been engineered to be difficult to turn off. All you really want as a user is a simple icon that says “Photoshop” which opens Photoshop and nothing else when you click it. There is no benefit to the user to have this intermediate software. Any updates to Photoshop could easily be checked and handled regularly by Photoshop itself. (But even then, I don’t want random updates all the time, anyway.)

When you start Photoshop these days, it shows you this screen:

[screenshot]

The sheer idea that Photoshop even has the technical ability, within itself, to transfer my project files (or finished files) away from my computer is deeply unsettling to me, even though they give you this choice/prompt. There’s also something uncomfortable about the language they use, almost as if to suggest that “you can (big smile) upload all your private files to our computers”, but (with a stern look) “for all you annoyingly privacy-conscious, un-hip, stone-age neanderthals, I guess we’ll also allow you to save your stupid files onto your stupid computer… maybe. For now. But you lose this and that benefit and remember that the Cloud is perfectly safe and you are really a very stupid and uncool person if you still want control over your files when we at Adobe could Cloud-store them for you instead in a far superior and more convenient manner. And don’t forget how convenient it is to go for the Cloud route and how uncool and old you seem if you pick this other, lame option.”

That’s how I interpret these “nudges” that all software these days seems to use to push users into a completely insane situation where they store their private files on somebody else’s computer. This is not just about me and my personal situation; it makes my skin crawl to think of all the people out there who simply don’t have a proper understanding of privacy and security (nor could they be expected to), who are constantly being pushed in this extremely scary direction.

Again, even if they never actually move to storing your private files in Adobe’s cloud by default (which is coming… believe me!), just the fact that the program has the technical ability to do this, and could be doing it if I don’t tread very carefully and vacuum all the settings for obscure little checkboxes which make it possible to turn this off, but 99.99% of all users won’t ever know it’s a setting, nor understand why they should go out of their way to disable it…

Over the years, as Photoshop has gotten worse and worse, not just in terms of privacy-destroying misfeatures but also in terms of sheer “dumbing down” and bloat (for example, those auto-playing pop-up videos showing you how to use the basic tools), I have evaluated numerous alternatives. They are, I regret to inform, all absolute trash. There is no real comparison whatsoever. Things which just intuitively feel natural in Photoshop (most of its features I just found out by using it and stumbling upon things which made sense) are completely gone from those so-called “alternatives”. So, in practice, there is just no alternative/choice.

Maybe it sounds silly, but in recent years I’ve seriously lost my creativity and my will to use software or computers at all, and it definitely cannot be solely attributed to “unrelated depression”. Modern software, made by “modern people”, is designed with a completely different mentality than the one I remember from the “roaring 1990s”, when the PC truly blossomed and looked extremely promising.

Ever since the turn of the millennium, the mentality has increasingly shifted from “making really polished and great software for great people” toward “constantly changing everything around randomly for the sake of change while adding enormous amounts of bloat and spying, with zero benefit to the user, whom we deeply disrespect”.

Staying with old versions of software is impossible for obvious security reasons, but also practical ones. Some good new things are introduced, but get drowned in unwanted software cancer.

I no longer feel like having Photoshop on my computer, because it comes with all this garbage, and it increasingly feels like I’m using a dumb terminal and creating stuff “in the Adobe cloud” rather than on my machine. It feels like, at any given moment, that private image I’m editing might fly right away to some computer somewhere.

Maintaining a dedicated virtual machine with another costly Windows 10 license only to turn off its virtual network card seems like insanity. It’s just not practical or affordable. (Those evaluation copies of Windows 10 only work for 90 days and the rearm stuff never works.)

Basically, even if you can figure out some kind of “trick” to work around this, the fact remains that I’m drained of all my energy and creativity just knowing what they are doing. I want to feel like I’m using a fully “offline”, professional-grade, robust, industrial application — not some kind of toy for babies. I thought they had special editions for consumers, but now, even the “real” Photoshop looks ridiculous visually.

Password checking resistant to GPU attacks and leaked password files without introducing a DoS attack on the server?

In the very old days, passwords were stored in clear text. This made it trivial for an attacker to log in if he had access to a leaked password file.

Later, passwords were hashed once and the hashed value was stored. If the attacker had a leaked password file, he could try hashing guesses and, if a hash value matched, use that guess to log in.

Then passwords were salted and hashed thousands of times on the server, and the salt and the resulting hash value were stored. If the attacker had a leaked password file, he could use specialized ASICs to hash guesses and, if a guess matched, use that password to log in.
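For concreteness, that salted, iterated scheme looks roughly like the following sketch (PBKDF2 in Go; the iteration count, salt size, and key length are illustrative, not recommendations):

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"crypto/subtle"
	"fmt"

	"golang.org/x/crypto/pbkdf2"
)

// Store salt + PBKDF2(password, salt, iterations); verify by recomputing.
const iterations = 100_000 // illustrative; tuned to server hardware in practice

func hashPassword(password string) (salt, hash []byte) {
	salt = make([]byte, 16)
	if _, err := rand.Read(salt); err != nil {
		panic(err)
	}
	hash = pbkdf2.Key([]byte(password), salt, iterations, 32, sha256.New)
	return salt, hash
}

func verify(password string, salt, hash []byte) bool {
	candidate := pbkdf2.Key([]byte(password), salt, iterations, 32, sha256.New)
	return subtle.ConstantTimeCompare(candidate, hash) == 1
}

func main() {
	salt, hash := hashPassword("correct horse battery staple")
	fmt.Println(verify("correct horse battery staple", salt, hash)) // true
	fmt.Println(verify("wrong guess", salt, hash))                  // false
}
```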

Can we do better than that?

Can we make password guessing so hard for an attacker that, even if he has the hashed password file, he gets no major advantage over testing passwords against the server itself, even with specialized ASICs?

Introducing inconsistent controls: is it appropriate for the sake of usability?

We’re building a web application based on Material UI. Throughout the app there are select components, which behave as shown in the example below: the default label describes the function of the select, and when a value is selected, the label shrinks and moves up so that it is still shown above the selected value.

[screenshot: standard use case]

We use those components mainly for standard “organizational” bulk operations, such as sort, group, etc. Therefore, no value is selected by default, the default label is shown, and the user knows what the control is there for.

However, we also have a settings page (and forms) where there are already set values, like language. This leaves the select in a state where that informational default label has already shrunk to its smaller size (and would always stay that way, since a language can’t be unselected).

Because of this, I’d like to change the select component here so that the label isn’t shown at all, and instead introduce another easy-to-read label placed above it, as shown in the picture below.

[screenshot: pre-select use case]

I feel like this would be a good approach in terms of usability, making the controls easier to recognize and thus helping the user change their settings. (Imagine a multitude of settings and looking for a specific one to change).

However, it also introduces inconsistencies, since there would be two kinds of select throughout the application.

I’d like to know whether those kinds of inconsistencies are acceptable for the sake of better usability. Do the benefits outweigh the possibility of irritating the user? Maybe you could provide related research or real-life examples of similar inconsistencies introduced for the sake of usability. Maybe there’s even a way to quantify those “pains vs. gains”?

Any input is greatly appreciated!

Introducing EPBN, the ultimate cost-effective PBN link solution

Hi guys, this has been a long time coming, but due to other ventures the official release of www.expresspbn.com was postponed until now! I have personally been testing the advantages and ranking improvements from using this system, and all I can say is it WORKS. You can use EPBN’s link sources for tier 1 links or for whatever you require! The system will be improved as time goes by with customer feedback; this is the first revision of the system.

EPBN is a public PBN link-sharing ecosystem and also a private blog network manager (if you choose to use that aspect of the system). You can set up and forget PBN campaigns to either the public listed domains within our pool or your own private network domains; managing domains is key. EPBN is a cost-effective way of obtaining high-quality link sources to diversify your backlink profile as well as strengthen your money sites. For just $50 per month, you’re able to post to 25 domains each month. That’s 25 PBN posts at $2 per post. Think of the ranking power potential for such a small investment. This should be your one-stop PBN shop.

The current CMS platform is self-hosted WordPress sites. Best of all, it doesn’t require a WP plugin on the site, which keeps footprints to a minimum.

EPBN was made with SEOs in mind. That being said, there is a learning curve to how the system works and how you set up your campaigns, but it’s very easy to learn and will be improved on as time goes by.

I am offering a 10% recurring discount for a limited time only, so make sure you get a subscription before it ends. I am always happy to answer any questions, so feel free to post one in this thread or PM me directly.

Is it bad practice to have a helper package in Go for testing purposes, or is this introducing dependencies?

I find myself repeating the same code when writing unit tests. For example, when writing functions that work with files, in the setup for the test I often write some code to create a file (in a directory specified with an environment variable) and populate it; then, after I have run the test, I destroy the file and folder.

To me this seems the correct way of doing things, as the test is completely independent of anything other than an environment variable, which can easily be set on multiple OSes.

However, I am clearly violating the DRY principle by writing this file-creation code every time, so I thought I could make a helper package that simplifies this. However, I feel that would mean the tests would become “dependent” on the helper package.
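To make it concrete, the helper I have in mind would look something like this (a sketch; the package name and the TEST_DATA_DIR environment variable are placeholders for my actual setup):

```go
// Package testhelper is the kind of shared test helper I am considering.
package testhelper

import (
	"os"
	"path/filepath"
	"testing"
)

// TempFile creates a populated file under the directory named by the
// TEST_DATA_DIR environment variable (falling back to t.TempDir when it
// is unset) and removes the file again when the test finishes.
func TempFile(t *testing.T, name, contents string) string {
	t.Helper()

	dir := os.Getenv("TEST_DATA_DIR")
	if dir == "" {
		dir = t.TempDir() // cleaned up automatically by the testing package
	}

	path := filepath.Join(dir, name)
	if err := os.WriteFile(path, []byte(contents), 0o644); err != nil {
		t.Fatalf("creating test file: %v", err)
	}
	t.Cleanup(func() { os.Remove(path) })
	return path
}
```

A test would then just call testhelper.TempFile(t, "data.txt", "some contents") and never repeat the setup/teardown code, which is exactly the kind of dependency I am unsure about introducing.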

So the questions are:

  1. In this situation, should the DRY principle be violated so as to avoid unnecessary dependencies?

  2. Is it OK to create a helper package as long as it can be imported from an external location like GitHub?

  3. Is there another approach (perhaps using dependency injection)?

Would a blitz round of D&D be a good way of introducing new players to D&D?

A while ago I was shown a new way of playing D&D where the DM gives you a random character and has you play a quick game. I have used blitz rounds to help more experienced players deal with harder challenges and to help me be a better DM. I was wondering if using a blitz round to teach new players how to play D&D would actually help them, or would it make things too complicated? I personally think it would be OK because it’s easy to set up.

Would introducing a healing wand break the game?

I’ve been perusing the magic items for D&D 5e in the DMG, and I’ve only found two non-consumable items that restore hit points: the Staff of Healing and the Rod of Resurrection, both of which are Rare or better. I was surprised that there were no healing wands, as that was something of a core trope in previous editions of the game. Would it significantly affect the core game if the players had access to a Wand of Cure Wounds that let them cast Cure Wounds the way the Wand of Magic Missiles lets them cast Magic Missile? Is there any evidence that the lack of a Cure Wounds item at lower rarity was an intentional design decision?


I’m considering such an item to balance a lack of core healers in my current party.