The code may contain redundant parts; adjust it as needed. I pieced it together from various sources to get the functionality I wanted. For now it only prints to the console and does not save anything to a file or database.
Environment setup:
- Set up the Python environment
  Verify the installation:
C:\Users\mac>python
Python 3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:59:51) [MSC v.1914 64 bit (AMD
4)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
- Install selenium:
  pip install selenium
- Verify the installation succeeded:
  pip show selenium
  If selenium's details are shown, the installation worked:
Name: selenium
Version: 3.13.0
Summary: Python bindings for Selenium
Home-page: https://github.com/SeleniumHQ/selenium/
Author: UNKNOWN
Author-email: UNKNOWN
License: Apache 2.0
Location: /usr/local/lib/python2.7/site-packages
Requires:
Required-by:
- Install a browser...
- Download the browser driver (using Chrome as the example)
  Go to the Taobao npm mirror site and open the page for your browser's driver:
  http://npm.taobao.org/
  Download the driver matching your browser version, unzip it, and place it in a directory on your PATH (also put a copy into the Chrome installation folder).
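A quick way to confirm the driver is actually visible on PATH is the standard library's `shutil.which`; a small sketch (it assumes the binary keeps its default name `chromedriver` / `chromedriver.exe`, which Windows resolves automatically):

```python
import shutil

def find_chromedriver():
    """Return the resolved chromedriver path, or None if it is not on PATH."""
    return shutil.which("chromedriver")

path = find_chromedriver()
print(path if path else "chromedriver not found on PATH")
```

If this prints the full path, Selenium should be able to launch Chrome without being given an explicit `executable_path`.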
Complete code:
```python
import time
from selenium import webdriver
import selenium.webdriver.support.ui as ui

options = webdriver.ChromeOptions()
# The original passed PhantomJS-style "service_args" strings here; for Chrome
# the closest equivalent flag is:
options.add_argument('--ignore-certificate-errors')
options.add_experimental_option('excludeSwitches', ['enable-automation'])

driver = webdriver.Chrome(
    r"X:\Program Files (x86)\Google\Chrome\Application\chromedriver.exe",
    options=options)
driver.get("https://weibo.com/login.php")
driver.maximize_window()
time.sleep(2)

# Log in
driver.find_element_by_id('loginname').clear()
driver.find_element_by_id('loginname').send_keys('用户名')  # your username
driver.find_element_by_name('password').send_keys('密码')   # your password
time.sleep(2)
driver.find_element_by_css_selector('.W_btn_a.btn_32px').click()
time.sleep(4)

# Open the profile tab
driver.find_element_by_xpath("//*[@id='Pl_Core_CustTab__2']/div/div/table/tbody/tr/td[1]/a").click()
time.sleep(2)
driver.set_page_load_timeout(10)
driver.set_script_timeout(10)

# Reads 15 pages by default; with fewer pages, the last post's info is printed repeatedly
for j in range(1, 15):
    # Scroll to the bottom so lazy-loaded posts appear
    js = "var q=document.documentElement.scrollTop=100000"
    driver.execute_script(js)
    time.sleep(2)
    artical = "//*[@id='Pl_Official_MyProfileFeed__23']/div/div"
    time.sleep(2)
    wait = ui.WebDriverWait(driver, 10, 0.1)
    # The original iterated over len() of the XPath *string*; counting the
    # matching elements is what was actually intended
    post_count = len(driver.find_elements_by_xpath(artical))
    for i in range(2, post_count - 1):
        driver.execute_script(js)
        time_x_path = "//*[@id='Pl_Official_MyProfileFeed__23']/div/div[{}]/div[1]/div[3]/div[2]/a[1]".format(i)
        content_x_path = "//*[@id='Pl_Official_MyProfileFeed__23']/div/div[{}]/div[1]/div[3]/div[4]".format(i)
        read_x_path = "//*[@id='Pl_Official_MyProfileFeed__23']/div/div[{}]/div[2]/div/ul/li[1]/a/span/span/i".format(i)
        try:
            add_time = wait.until(lambda driver: driver.find_element_by_xpath(time_x_path)).text
            abstract = wait.until(lambda driver: driver.find_element_by_xpath(content_x_path)).text
            page_views = wait.until(lambda driver: driver.find_element_by_xpath(read_x_path)).text
        except Exception:
            print('exception happened')
            continue  # skip posts whose fields cannot be read
        print("{}\t{}\n{}".format(add_time, abstract, page_views))

    # The "next page" link can appear at either of two locations
    next_page1 = "/html/body/div[1]/div/div[2]/div/div[2]/div[2]/div[4]/div/div[47]/div/a"
    next_page2 = "//*[@id='Pl_Official_MyProfileFeed__23']/div/div[47]/div/a[2]"
    next_page = None
    for xp in (next_page1, next_page2):
        candidates = driver.find_elements_by_xpath(xp)
        if candidates and candidates[0].text == "下一页":  # "next page"
            next_page = candidates[0]
            break
    if next_page is None:
        break  # no "next page" link: last page reached
    print(next_page.text)
    next_page.click()
    time.sleep(1)
```
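Since the script only prints to the console, here is a minimal sketch of writing each scraped post to a CSV file instead (the file name and column order are my own choice, not from the original code):

```python
import csv

def save_posts(path, rows):
    """Append (add_time, abstract, page_views) rows to a CSV file."""
    # utf-8-sig makes the file open cleanly in Excel on Windows
    with open(path, "a", newline="", encoding="utf-8-sig") as f:
        writer = csv.writer(f)
        writer.writerows(rows)

# Example with a made-up row, the shape the scrape loop would collect
save_posts("weibo_posts.csv", [("2020-01-01 10:00", "some post text", "1000")])
```

Inside the loop, you would append each `(add_time, abstract, page_views)` tuple to a list instead of printing, then call `save_posts` once per page.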
Run output:
The output may contain exceptions: when a post is not public, its data cannot be fetched. Also, I did not find a good way to detect the total page count, so the loop count is hard-coded; adjust it to match your own number of pages.
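One way to avoid hard-coding the page count is to stop when no "下一页" (next page) link exists: `find_elements_by_xpath` returns an empty list instead of raising when nothing matches. A sketch of that check, using the same two candidate XPaths as the script:

```python
def find_next_page(driver, xpaths):
    """Return the first link whose text is '下一页' (next page), or None."""
    for xp in xpaths:
        for element in driver.find_elements_by_xpath(xp):
            if element.text == "下一页":
                return element
    return None
```

With this helper, the outer `for j in range(1, 15)` loop could become `while True`, breaking out when `find_next_page(...)` returns `None` and clicking the returned element otherwise.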