
Scrapy: Crawled 0 pages, scraped 0 items

  •   SY9 · asked 6 years ago

    I am trying to scrape company information from a government website using Scrapy. My spider code is below.

    Spider code

    import scrapy
    from scrapy.linkextractors import LinkExtractor
    from scrapy.spiders import CrawlSpider
    from ..items import CompaniesHouseItem
    
    class SpendolaterSpider(scrapy.Spider):
        name = 'spendolater'
        allowed_domains = ['beta.companieshouse.gov.uk']
        start_url = ['https://beta.companieshouse.gov.uk/company/10511127']
    
        custom_settings = {"DOWNLOAD_DELAY": 1,}
    
        def crawling(self, response):
            domain = "https://beta.companieshouse.gov.uk/company/"
            for url in response.css("a::attr('href')").extract():
                if not url.startswith('https://'):
                    continue 
                if domain not in url:
                    yield scrapy.Request(url, callback=self.parse)
                yield scrapy.Request(url, callback=self.parse_dir_contents)
    
        def parse_item(self, response):
            for contents in response.xpath('//*[@id="page-container"]'):
                item = CompaniesHouseItem()
                item["name"] = response.xpath('//*[@id="company-name"]').extract()
                item["location"] = response.xpath('//*[@id="content-container"]/dl/dd').extract()
                item['foundation'] = response.xpath('//*[@id="company-creation-date"]').extract()
                items['type'] = response.xpath('//*[@id="company-type"]').extract()
                items['SIC'] = response.xpath('//*[@id="sic0"]').extract()
                yield item
    

    It runs without showing any errors, but it does not extract any information. After running, the command line shows the message "Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)".

    The items.py file is as follows.

    items.py

    import scrapy
    
    class CompaniesHouseItem(scrapy.Item):
        name = scrapy.Field()
        location = scrapy.Field()
        foundation = scrapy.Field()
        type = scrapy.Field()
        SIC = scrapy.Field()
    

    The output is as follows.

    Output

    2018-03-14 17:51:56 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: companies_house)
    2018-03-14 17:51:56 [scrapy.utils.log] INFO: Versions: lxml 4.1.1.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.4.0, w3lib 1.19.0, Twisted 17.9.0, Python 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:04:45) [MSC v.1900 32 bit (Intel)], pyOpenSSL 17.5.0 (OpenSSL 1.1.0g  2 Nov 2017), cryptography 2.1.4, Platform Windows-10-10.0.16299-SP0
    2018-03-14 17:51:56 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'companies_house', 'DOWNLOAD_DELAY': 1, 'NEWSPIDER_MODULE': 'companies_house.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['companies_house.spiders']}
    2018-03-14 17:51:57 [scrapy.middleware] INFO: Enabled extensions:
    ['scrapy.extensions.corestats.CoreStats',
     'scrapy.extensions.telnet.TelnetConsole',
     'scrapy.extensions.logstats.LogStats']
    2018-03-14 17:51:57 [scrapy.middleware] INFO: Enabled downloader middlewares:
    ['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
     'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
     'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
     'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
     'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
     'scrapy.downloadermiddlewares.retry.RetryMiddleware',
     'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
     'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
     'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
     'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
     'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
     'scrapy.downloadermiddlewares.stats.DownloaderStats']
    2018-03-14 17:51:57 [scrapy.middleware] INFO: Enabled spider middlewares:
    ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
     'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
     'scrapy.spidermiddlewares.referer.RefererMiddleware',
     'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
     'scrapy.spidermiddlewares.depth.DepthMiddleware']
    2018-03-14 17:51:57 [scrapy.middleware] INFO: Enabled item pipelines:
    []
    2018-03-14 17:51:57 [scrapy.core.engine] INFO: Spider opened
    2018-03-14 17:51:57 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
    2018-03-14 17:51:57 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
    2018-03-14 17:51:57 [scrapy.core.engine] INFO: Closing spider (finished)
    2018-03-14 17:51:57 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
    {'finish_reason': 'finished',
     'finish_time': datetime.datetime(2018, 3, 14, 8, 51, 57, 239817),
     'log_count/DEBUG': 1,
     'log_count/INFO': 7,
     'start_time': datetime.datetime(2018, 3, 14, 8, 51, 57, 231826)}
    2018-03-14 17:51:57 [scrapy.core.engine] INFO: Spider closed (finished)
    

    Any suggestions would be greatly appreciated. Thanks in advance.

    2 Answers  |  last active 4 years ago
    1 · Lore · 6 years ago

    By default, Scrapy reads the first address to scrape from start_urls (not start_url) and starts parsing with the parse method (not crawling). Try renaming both and re-running the spider.
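
    A minimal sketch of those two renames (untested), keeping the question's selectors and settings. Note that the original parse_item also assigns to items in two places, which would raise a NameError once the callback actually runs, so the sketch uses item throughout:

    import scrapy
    from ..items import CompaniesHouseItem

    class SpendolaterSpider(scrapy.Spider):
        name = 'spendolater'
        allowed_domains = ['beta.companieshouse.gov.uk']
        # Scrapy reads "start_urls" (plural); an attribute named "start_url" is simply ignored
        start_urls = ['https://beta.companieshouse.gov.uk/company/10511127']

        custom_settings = {"DOWNLOAD_DELAY": 1}

        # the default callback for responses from start_urls is "parse", not "crawling"
        def parse(self, response):
            item = CompaniesHouseItem()
            item["name"] = response.xpath('//*[@id="company-name"]').extract()
            item["location"] = response.xpath('//*[@id="content-container"]/dl/dd').extract()
            item["foundation"] = response.xpath('//*[@id="company-creation-date"]').extract()
            item["type"] = response.xpath('//*[@id="company-type"]').extract()
            item["SIC"] = response.xpath('//*[@id="sic0"]').extract()
            yield item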

    2 · Umair Ayub · 6 years ago

    You don't have a def start_requests(self), only start_url, so Scrapy expects to pick URLs from a start_urls list and use parse as the callback method.

    In other words, you are missing def parse(self, response). Change def crawling(self, response) to def parse(self, response).

    Also, the logic of your code is wrong; think through the flow of the crawl before writing the code.

    Put a page in start_urls that contains links to companies, i.e. a listing page.

    Then create def parse(self, response) and write a for loop that iterates over each company link; a sketch of that structure follows.
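
    A rough sketch of that two-stage flow, assuming a listing page as the entry point (the search URL below is only a placeholder) and reusing the question's selector for the company name:

    import scrapy
    from ..items import CompaniesHouseItem

    class SpendolaterSpider(scrapy.Spider):
        name = 'spendolater'
        allowed_domains = ['beta.companieshouse.gov.uk']
        # placeholder: some listing/search page that links out to individual company pages
        start_urls = ['https://beta.companieshouse.gov.uk/search/companies?q=example']

        def parse(self, response):
            # iterate over each link on the listing page and follow only company pages
            for href in response.css('a::attr(href)').extract():
                if '/company/' in href:
                    yield response.follow(href, callback=self.parse_company)

        def parse_company(self, response):
            # extract the same fields as the question's parse_item; only "name" shown here
            item = CompaniesHouseItem()
            item["name"] = response.xpath('//*[@id="company-name"]').extract()
            yield item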