hohahahahah 2017-07-22 15:42

Question about saving scraped data as JSON with Scrapy 1.4.0

Spider code:

from textsc.items import TextscItem
from scrapy.selector import Selector
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor
class Baispider(CrawlSpider):
    name = "Baidu"
    allowed_domains = ["baidu.com"]
    start_urls = [
        "https://zhidao.baidu.com/list"
        ]
    rules = (
        Rule(LinkExtractor(allow=('/shop', ), deny=('fr', )), callback='parse_item'),
    )
    def parse_item(self, response):
        sel= Selector(response)
        items=[]
        item=TextscItem()
        title=sel.xpath('//div[@class="shop-menu"]/ul/li/a/text()').extract()
        for i in title:
            items.append(i)
        item['TitleName'] = items
        print (item['TitleName'])
        return item
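
As an aside, on Scrapy 1.4 the scrapy.contrib.* import paths are deprecated aliases for scrapy.spiders and scrapy.linkextractors, and response.xpath() can be called directly instead of wrapping the response in a Selector. A rough sketch of the same spider against the current paths (behaviour intended to be equivalent; the title loop is folded into one assignment and yield replaces return):

from textsc.items import TextscItem
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class Baispider(CrawlSpider):
    name = "Baidu"
    allowed_domains = ["baidu.com"]
    start_urls = ["https://zhidao.baidu.com/list"]
    rules = (
        Rule(LinkExtractor(allow=('/shop',), deny=('fr',)), callback='parse_item'),
    )

    def parse_item(self, response):
        item = TextscItem()
        # response.xpath() replaces the explicit Selector(response) wrapper
        item['TitleName'] = response.xpath('//div[@class="shop-menu"]/ul/li/a/text()').extract()
        yield item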

items.py code:


import scrapy
import json
class TextscItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    TitleName = scrapy.Field()
    pass
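
(The import json in items.py is unused; serialization to JSON is handled by Scrapy's feed exporter, not by the item class.) A scrapy.Item behaves like a dict, so a field declared with scrapy.Field() accepts any value, including the list of titles assigned in parse_item. A quick illustration with made-up values:

item = TextscItem()
item['TitleName'] = ['shop link 1', 'shop link 2']  # placeholder values, not real scraped data
print(dict(item))  # {'TitleName': ['shop link 1', 'shop link 2']}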

This is the command I ran:

scrapy crawl Bai -o items.json -t json

It ran fine with no errors,
but when I opened the items.json file afterwards,
there was nothing in it.
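
For reference, as far as I understand, the -o and -t flags are just a command-line shortcut for Scrapy's feed-export settings, so the run should be roughly equivalent to putting something like this in the project's settings file (assumed here to be textsc/settings.py):

# textsc/settings.py -- assumed location; equivalent of `-o items.json -t json`
FEED_URI = 'items.json'
FEED_FORMAT = 'json'
# optional: keep non-ASCII text readable in the exported file (available since Scrapy 1.2)
FEED_EXPORT_ENCODING = 'utf-8'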

How can I fix this?
Thanks.


1 answer

  • devmiao 2017-07-23 14:43
    This answer was accepted as the best answer by the asker.
