Scrapy does not yield the URLs of websites whose DNS lookup failed

I have a text file with a list of URLs, each of which redirects to another URL. I want to collect all the redirected URLs, so I wrote a spider that opens each URL from the text file. For a few of them I get errors such as "DNS lookup failed" or "No route to host". I checked those URLs directly in a browser and they give an "IP address not found" error. However, I want Scrapy to yield every redirected URL regardless of the error. Any solutions to achieve this?

Here is the spider I ran:

import scrapy

class AwesomeSpiderSpider(scrapy.Spider):
    name = 'web_uk'

    # opening the list of urls that gets redirected.
    f = open("urls.txt")
    start_urls = [url.strip() for url in f.readlines()]
    f.close()

    def parse(self, response):
        item = {}
        item['Web Address'] = response.url
        yield item

Here is the output:

2019-07-04 03:02:03 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://gatesidedevelopments.com/> (referer: None) ['cached']
2019-07-04 03:02:03 [scrapy.core.scraper] DEBUG: Scraped from <200 https://honka.com/gb/en/> {'Web Address': 'https://honka.com/gb/en/'}
2019-07-04 03:02:03 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.dapconstruction.co.uk/> {'Web Address': 'https://www.dapconstruction.co.uk/'}
2019-07-04 03:02:03 [scrapy.core.scraper] DEBUG: Scraped from <200 http://www.virtueprojects.com> {'Web Address': 'http://www.virtueprojects.com'}
2019-07-04 03:02:03 [scrapy.core.scraper] DEBUG: Scraped from <200 https://waynemoore.com/> {'Web Address': 'https://waynemoore.com/'}
2019-07-04 03:02:03 [scrapy.core.scraper] DEBUG: Scraped from <200 http://www.avenuenorth.co.uk/> {'Web Address': 'http://www.avenuenorth.co.uk/'}
2019-07-04 03:02:03 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.galkivconstruction.co.uk/robots.txt> (failed 1 times): DNS lookup failed: no results for hostname lookup: www.galkivconstruction.co.uk.
2019-07-04 03:02:03 [scrapy_user_agents.middlewares] DEBUG: Assigned User-Agent Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.62 Safari/537.36
2019-07-04 03:02:03 [scrapy.core.scraper] DEBUG: Scraped from <200 http://www.mic.uk.com> {'Web Address': 'http://www.mic.uk.com'}
2019-07-04 03:02:03 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.vlconstruction.co.uk/> {'Web Address': 'https://www.vlconstruction.co.uk/'}
2019-07-04 03:02:03 [scrapy.core.scraper] DEBUG: Scraped from <200 http://www.whitehalloflondon.co.uk> {'Web Address': 'http://www.whitehalloflondon.co.uk'}
2019-07-04 03:02:03 [scrapy.core.scraper] DEBUG: Scraped from <200 http://www.vandthomes.com> {'Web Address': 'http://www.vandthomes.com'}
2019-07-04 03:02:03 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.atlanticdwellings.com/> {'Web Address': 'https://www.atlanticdwellings.com/'}
2019-07-04 03:02:03 [scrapy.core.scraper] DEBUG: Scraped from <200 http://www.rgfhomeimprovements.co.uk> {'Web Address': 'http://www.rgfhomeimprovements.co.uk'}
2019-07-04 03:02:03 [scrapy.core.scraper] DEBUG: Scraped from <200 https://leonoc.co.uk/> {'Web Address': 'https://leonoc.co.uk/'}
2019-07-04 03:02:03 [scrapy.core.scraper] DEBUG: Scraped from <200 http://www.home-refurbishments.co.uk/> {'Web Address': 'http://www.home-refurbishments.co.uk/'}
2019-07-04 03:02:03 [scrapy.core.scraper] DEBUG: Scraped from <200 https://gatesidedevelopments.com/> {'Web Address': 'https://gatesidedevelopments.com/'}
2019-07-04 03:02:07 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.rrawlins.co.uk/robots.txt> (failed 1 times): An error occurred while connecting: 113: No route to host.
2019-07-04 03:02:07 [scrapy_user_agents.middlewares] DEBUG: Assigned User-Agent Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36
2019-07-04 03:02:11 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.galkivconstruction.co.uk/robots.txt> (failed 2 times): DNS lookup failed: no results for hostname lookup: www.galkivconstruction.co.uk.
2019-07-04 03:02:11 [scrapy_user_agents.middlewares] DEBUG: Assigned User-Agent Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.45 Safari/537.36
2019-07-04 03:02:12 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.rrawlins.co.uk/robots.txt> (failed 2 times): An error occurred while connecting: 113: No route to host.
2019-07-04 03:02:12 [scrapy_user_agents.middlewares] DEBUG: Assigned User-Agent Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36
2019-07-04 03:02:17 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET http://www.galkivconstruction.co.uk/robots.txt> (failed 3 times): DNS lookup failed: no results for hostname lookup: www.galkivconstruction.co.uk.
2019-07-04 03:02:17 [scrapy.downloadermiddlewares.robotstxt] ERROR: Error downloading <GET http://www.galkivconstruction.co.uk/robots.txt>: DNS lookup failed: no results for hostname lookup: www.galkivconstruction.co.uk.
Traceback (most recent call last):
  File "/home/ubuntu/scrapy/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1416, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "/home/ubuntu/scrapy/local/lib/python2.7/site-packages/twisted/python/failure.py", line 512, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "/home/ubuntu/scrapy/local/lib/python2.7/site-packages/scrapy/core/downloader/middleware.py", line 43, in process_request
    defer.returnValue((yield download_func(request=request,spider=spider)))
  File "/home/ubuntu/scrapy/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 654, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/home/ubuntu/scrapy/local/lib/python2.7/site-packages/twisted/internet/endpoints.py", line 975, in startConnectionAttempts
    "no results for hostname lookup: {}".format(self._hostStr)
DNSLookupError: DNS lookup failed: no results for hostname lookup: www.galkivconstruction.co.uk.
2019-07-04 03:02:17 [scrapy_user_agents.middlewares] DEBUG: Assigned User-Agent Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36
2019-07-04 03:02:19 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET http://www.rrawlins.co.uk/robots.txt> (failed 3 times): An error occurred while connecting: 113: No route to host.
2019-07-04 03:02:19 [scrapy.downloadermiddlewares.robotstxt] ERROR: Error downloading <GET http://www.rrawlins.co.uk/robots.txt>: An error occurred while connecting: 113: No route to host.
Traceback (most recent call last):
  File "/home/ubuntu/scrapy/local/lib/python2.7/site-packages/scrapy/core/downloader/middleware.py", line 43, in process_request
    defer.returnValue((yield download_func(request=request,spider=spider)))
ConnectError: An error occurred while connecting: 113: No route to host.
2019-07-04 03:02:19 [scrapy_user_agents.middlewares] DEBUG: Assigned User-Agent Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36
2019-07-04 03:02:22 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.rrawlins.co.uk/> (failed 1 times): An error occurred while connecting: 113: No route to host.
2019-07-04 03:02:22 [scrapy_user_agents.middlewares] DEBUG: Assigned User-Agent Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.79 Safari/537.36
2019-07-04 03:02:23 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.galkivconstruction.co.uk> (failed 1 times): DNS lookup failed: no results for hostname lookup: www.galkivconstruction.co.uk.
2019-07-04 03:02:23 [scrapy_user_agents.middlewares] DEBUG: Assigned User-Agent Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.43 Safari/537.31
2019-07-04 03:02:29 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.galkivconstruction.co.uk> (failed 2 times): DNS lookup failed: no results for hostname lookup: www.galkivconstruction.co.uk.
2019-07-04 03:02:29 [scrapy_user_agents.middlewares] DEBUG: Assigned User-Agent Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36
2019-07-04 03:02:29 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.rrawlins.co.uk/> (failed 2 times): An error occurred while connecting: 113: No route to host.
2019-07-04 03:02:29 [scrapy_user_agents.middlewares] DEBUG: Assigned User-Agent Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36
2019-07-04 03:02:34 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET http://www.galkivconstruction.co.uk> (failed 3 times): DNS lookup failed: no results for hostname lookup: www.galkivconstruction.co.uk.
2019-07-04 03:02:34 [scrapy.core.scraper] ERROR: Error downloading <GET http://www.galkivconstruction.co.uk>
Traceback (most recent call last):
  File "/home/ubuntu/scrapy/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1416, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "/home/ubuntu/scrapy/local/lib/python2.7/site-packages/twisted/python/failure.py", line 512, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "/home/ubuntu/scrapy/local/lib/python2.7/site-packages/scrapy/core/downloader/middleware.py", line 43, in process_request
    defer.returnValue((yield download_func(request=request,spider=spider)))
  File "/home/ubuntu/scrapy/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 654, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/home/ubuntu/scrapy/local/lib/python2.7/site-packages/twisted/internet/endpoints.py", line 975, in startConnectionAttempts
    "no results for hostname lookup: {}".format(self._hostStr)
DNSLookupError: DNS lookup failed: no results for hostname lookup: www.galkivconstruction.co.uk.
2019-07-04 03:02:36 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET http://www.rrawlins.co.uk/> (failed 3 times): An error occurred while connecting: 113: No route to host.
2019-07-04 03:02:36 [scrapy.core.scraper] ERROR: Error downloading <GET http://www.rrawlins.co.uk/>
Traceback (most recent call last):
  File "/home/ubuntu/scrapy/local/lib/python2.7/site-packages/scrapy/core/downloader/middleware.py", line 43, in process_request
    defer.returnValue((yield download_func(request=request,spider=spider)))
ConnectError: An error occurred while connecting: 113: No route to host.
2019-07-04 03:02:36 [scrapy.core.engine] INFO: Closing spider (finished)

Looking for source for a “yield space” rule my 3.5 group uses

I play D&D 3.5 with a group of folks. One of the rules they use in combat allows Alice, on Alice’s turn, to ask Bob to yield his space to Alice. Bob chooses a space to move to, and Alice moves into Bob’s space.

I’ve tried to find a source for this rule and I’ve been unable to. I’ve looked in the PHB, the DMG, the Rules Compendium, and the SRD, and I’m not finding anything. I have a strong suspicion that this may be a house rule that they’ve played with so long that it’s just part of the game for them. I’ve never heard of anything like this and would like some input on where this rule might have come from. Does anyone know of anything like this in a splatbook, or is it from another game?

Is there any command line alternative to Activity Monitor that would yield the same results?

I know there are the top and htop commands, but they lack a lot of information. For example, there is no way to list the processes from the Energy tab. Alternatively, is there at least some kind of API for Activity Monitor that could be used to develop the desired command-line tool?

Can Plant Growth be repeatedly cast on the same area to exponentially increase the yield of harvests there (more than twice)?

I was looking into getting the spell Plant Growth because we work with some agrarian societies in our campaign.

If you cast the 8-hour version that enriches plants in a 1 mile radius to double their next harvest over the same 1 mile every day, what stops it from increasing the harvest exponentially by further enriching the crops?

The theory being that even using a weekend of casting we could give a city/farmer x8 the harvest instead of just x2.

yield return new WaitForSeconds(1) countdown timer varies after Ludo dice roll

In my 2D Ludo game I implement a 30-second per-player turn timer using yield return new WaitForSeconds(1). Whenever the timer reaches 0 and the player hasn’t rolled the dice, the dice moves to the next player. My problem: at the start the timer works perfectly, and turns are correctly skipped for all 4 players when they don’t roll. But once a dice roll starts, i.e. while the rolling animation plays, the timer starts counting down much faster, as if WaitForSeconds(1) depended on the dice rolling. After a few rolls the timer goes from 30 to 0 in a few seconds, whereas I want it to always take 30 seconds to go from 30 to 0.

I have tried the code two ways.

First, with WaitForSeconds(1):

float currCountdownValue;

public IEnumerator StartCountdown(float countdownValue = 30)
{
    currCountdownValue = countdownValue;
    while (currCountdownValue > 0)
    {
        Debug.Log("Countdown: " + currCountdownValue);
        yield return new WaitForSeconds(1.0f);
        currCountdownValue--;
    }
    if (!isdicerolled)
    {
        if (playerTurn == "RED")
        {
            playerTurn = "BLUE";
            InitializeDice();
        }
        else if (playerTurn == "BLUE")
        {
            playerTurn = "GREEN";
            InitializeDice();
        }
        else if (playerTurn == "GREEN")
        {
            playerTurn = "YELLOW";
            InitializeDice();
        }
        else if (playerTurn == "YELLOW")
        {
            playerTurn = "RED";
            InitializeDice();
        }
    }
}

The second time I used Time.deltaTime, but it gave me the same result.

So what should I do now? I want a timer like in Teen Patti or poker games. Everything else works, but after rolling the dice the timer speed varies. Please help.

How does Python handle the “yield” statement internally?

I was reading about Python’s yield statement, and it seems that this statement creates a generator, which would be a kind of list of data whose values are returned on demand, as if the last “state” of the iteration were somehow “memorized”.

To verify this, look at this function that returns three letters:

def letras():
    yield 'A'
    yield 'B'
    yield 'C'

Calling the letras() function in a for loop to get the values:

for letra in letras():
    print(letra)

Note that the output is:

A
B
C

Now, if I modify the letras() function to increment the value of v, a global variable:

def letras():
    global v
    v += 1
    print(v)
    yield 'A'
    yield 'B'
    yield 'C'

the output is:

1
A
B
C

Note that v has the value 1. This shows that the letras() function did not memorize the state of v, only the values returned by yield, as if the function had been called a single time. Consequently, I still cannot see clearly how yield works, because the function’s behavior differed from what I expected, which confused me even more about yield. Perhaps understanding how Python handles it internally will help.
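The behavior above can be made visible with a small experiment (a sketch of my own, not from the original code): calling a generator function only creates a generator object; the function body does not run at all until the generator is first advanced, and each yield then freezes execution until the next request.

```python
def letras():
    print("body started")  # runs only when the generator is first advanced
    yield 'A'
    yield 'B'
    yield 'C'

g = letras()             # nothing is printed yet: only a generator object exists
print(type(g).__name__)  # generator
print(next(g))           # now "body started" is printed, then A
print(next(g))           # B: execution resumed right after the first yield
```

This is why v was incremented only once: the body, including `v += 1`, runs a single time when iteration starts, and the subsequent yields merely resume that one suspended execution.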

Question

So, I would like to know: how does Python handle the yield statement internally? What structure or mechanism does yield use?

Do you have to write yield twice in a coroutine?

I wrote a very simple coroutine that prints the sum of two numbers:

#!/usr/bin/env python3

def gen():
    nums = yield
    yield(nums[0] + nums[1])

g = gen()
g.send(None)
try:
    print(g.send((1, 2)))
except:
    print('----stop')

As you can see, yield is used twice in it.

Then I tried to rewrite the coroutine so that yield is used only once:

def gen():
    nums = yield nums[0] + nums[1]

g = gen()
g.send(None)
try:
    print(g.send((1, 2)))
except:
    print('----stop')

As a result, I got the following error message in the console:

File "./hello.py", line 16, in gen
    nums = yield nums[0] + nums[1]
UnboundLocalError: local variable 'nums' referenced before assignment

Please tell me, what did I do wrong? Do you really have to use yield twice to return a result once? Is that normal?

My understanding is:

  1. the coroutine is created

  2. on the first call you need to pass None into it (in this case execution runs UP TO the first yield)

  3. on the second call I pass the numbers as an argument. In this case execution resumes FROM the first yield onward. That is, this executes:

    yield nums[0] + nums[1]

which would be equivalent to

return nums[0] + nums[1]  
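For comparison, a common pattern (a sketch of my own, not from the original post) keeps a single yield inside a loop: the same yield expression first hands back the previous result and then receives the next value sent in. The assignment `nums = yield result` only happens after the coroutine is resumed, which is why `nums` cannot appear on the right-hand side of its own first yield.

```python
def gen():
    result = None
    while True:
        # pause here; the sent value becomes nums, and the previous
        # result is handed back to the caller at the same time
        nums = yield result
        result = nums[0] + nums[1]

g = gen()
g.send(None)           # prime: run up to the first yield
print(g.send((1, 2)))  # 3
print(g.send((4, 5)))  # 9
```

So a priming step before the first send is always needed, whether it is a bare `yield` or a `yield` inside a loop.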

Examples of Binary Functions that Yield Regular Graphs with Invertible Adjacency Matrix


Question:

What are examples of functions $ f$, provided they exist, with the following properties:

\begin{align}
f:&\ \mathbb{N}\times\mathbb{N}\ni(i,j)\ \mapsto\ \beta\in\lbrace 0,1\rbrace\\
&f(i,i)=0\\
&f(i,j)=f(j,i)\\
&\sum_{i=1}^{\infty}f(i,j)=k\in\mathbb{N}\\
&\sum_{j=1}^{\infty}f(i,j)=k\in\mathbb{N}\\
&F\in\lbrace0,1\rbrace^{n\times n},\ 1\le i,j\le n,\ F_{ij}=f(i,j)\ \implies\ |F|\ne 0
\end{align}
The calculation of $ f(i,j)$ must not depend on $ n$, but would ideally be parameterized by $ k\in\mathbb{N}$.