Scraping and Analyzing Taobao Products with Python
1. Data Acquisition and Storage
Taobao's anti-scraping measures have become increasingly thorough in recent years, which makes crawling harder: product information is only visible after logging in, and the listing data is rendered by dynamic loading. This article therefore uses Selenium to drive a real browser and scrape the product-page details.
Selenium and the matching browser driver, chromedriver, must be installed in advance. Because Chrome updates itself automatically, my Chrome version no longer matched my chromedriver version, so I pointed Selenium at the driver executable explicitly:

chromedriver_path = r'C:\Program Files\Google\Chrome\Application\chromedriver.exe'
browser = webdriver.Chrome(executable_path=chromedriver_path)

This loaded the browser successfully. When scraping Taobao food listings, you must scan the QR code and log in manually before the crawl can proceed. In the end, 2,733 records were scraped.
import re
import time
import pymongo
from selenium import webdriver
from pyquery import PyQuery as pq
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
KEYWORD = '美食'
MONGO_TABLE = 'meishi'
chromedriver_path=r'C:\Program Files\Google\Chrome\Application\chromedriver.exe'
browser = webdriver.Chrome(executable_path=chromedriver_path)
wait = WebDriverWait(browser, 10)
client = pymongo.MongoClient('localhost', 27017)
mongo_db = client['taobao']
db = mongo_db['food']  # `db` is the 'food' collection in the 'taobao' database
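With the collection object ready, each scraped record can be written through a small helper. This is a sketch of my own, not code from the original article: the function name `save_to_mongo` and its error handling are assumptions. Taking the collection as a parameter keeps it usable with the `db` object created above.

```python
def save_to_mongo(collection, product):
    """Insert one product dict into a MongoDB collection.

    Returns True on success and False on failure, so the crawl loop
    can log a failed save without aborting the whole run.
    """
    try:
        result = collection.insert_one(product)
        return result.acknowledged
    except Exception as exc:  # pymongo raises PyMongoError subclasses
        print('save failed:', exc)
        return False
```

Usage would be `save_to_mongo(db, {'title': ..., 'price': ...})` once a product has been parsed from the page.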
def search_page():
    print('searching...')
    try:
        browser.get('https://www.taobao.com')
        input = wait.until(
            EC.presence_of_element_located((By.CSS_SELECTOR, '#q'))
        )
        submit = wait.until(
            EC.element_to_be_clickable((By.CSS_SELECTOR, '#J_TSearchForm > div.search-button > button'))
        )
        input.send_keys(KEYWORD)
        submit.click()
        total = wait.until(EC.presence_of_element_located(
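The `total` element on a Taobao results page carries text along the lines of「共 100 页」, and the page count is usually pulled out of it with a regex. A minimal sketch (the exact text format is an assumption, not confirmed by the code above):

```python
import re

def parse_total_pages(text):
    """Extract the first run of digits from a string such as '共 100 页'.

    Returns the page count as an int, or None if no digits are found.
    """
    match = re.search(r'(\d+)', text)
    return int(match.group(1)) if match else None
```

For example, `parse_total_pages('共 100 页')` returns `100`, which can then bound the pagination loop.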